Obsolete, superseded by sstate
# Hacks to neuter the legacy staging mechanism while the reworked staging
# class is in use.
python () {
    from bb.data import expand

    # Compatibility: redirect any task dependency on the legacy
    # do_populate_staging task to the new do_capture task.
    for k in d.keys():
        if d.getVarFlag(k, "task"):
            deps = expand(d.getVarFlag(k, "depends") or "", d)
            if deps:
                d.setVarFlag(k, "depends", deps.replace(":do_populate_staging", ":do_capture"))
    d.setVarFlag("do_populate_staging", "deps", [])
    d.delVarFlag("do_configure", "deptask")
    d.delVarFlag("do_setscene", "selfstamp")
}

ASSUME_PROVIDED += "stagemanager-native"

# Stub out the legacy staging tasks.
python do_populate_staging_local () {
    pass
}

python do_setscene_local () {
    pass
}

do_stage_local () {
    :
}
# Complete rework of the OpenEmbedded staging mechanism
#
# The primary output of a recipe "build" (where build is defined as the
# artifacts produced by the do_compile task and installed into filesystem
# layout by do_install) is a set of artifacts captured into an archive. The
# form of this archive may vary, but currently it's a tarball. All subsequent
# tasks will be made to use this archive, including packaging. A new task is
# added which runs prior to do_configure that sets up a private staging area,
# using the archives of its build & runtime dependencies.
#
# Next steps:
# - Add callbacks for mangling things like .la files in staging, or for
#   recipes to inject bits to mangle it. A good example is the libc.so linker
#   script from glibc.
# - Integration work
# - Figure out how to keep this relatively independent of the current stuff,
#   as a transition mechanism for people on old versions of BitBake.
#   - e1464b0c0fa9ca60be0a1209f32978639f633c8d may be unnecessary, ask RP.
# - Look into either using the -dev packages rather than captured install
#   output (meh), or at least manually grab just the development bits, or rip
#   out docs and such. We may just want a FILES-like variable containing
#   space-separated paths/globs for the bits we want to include in staging.
#   Then we alter the staging population to only pull the bits we want out of
#   the archives.
# - Find better names for this stuff: archive, depstaging, captureinstall,
#   etc. Also clean up the code a bit. May want to go OO; we're awfully
#   functional in methodology at the moment, since I built from the bottom up.
# - Revamp the install capturing to be more generally useful, for the future.
# - Deal with native/cross prefix issues. Not all of these projects are
#   relocatable, which will cause problems given how I'm now doing this.
#   Either:
#   - Go back to forcing prefix, etc. inside of staging, and make the
#     stage_archives() function compensate.
#   - Make all of the native and cross projects relocatable, and add sanity
#     checks for recipes or classes overriding the prefix, etc. to locations
#     other than those set in the configuration metadata.
# - Move the 'NOTE: Resolving any missing task queue dependencies' message out
#   of taskdata, into the runqueue/cooker.
# - Measure performance with the old and new staging implementations. The new
#   one will almost certainly be slower; the question is whether the
#   performance loss is significant.
#   - If it's an issue, we may want to try caching the populated staging
#     areas. We could cache them in either archives or unpacked trees in
#     TMPDIR. If unpacked, either a separate tree outside of the workdirs, or
#     make the first recipe to need that combination store its staging area in
#     the PersistData database, indexed by the staging idents of everything in
#     that particular staging area.
#   - Another possibility is to capture in a different way, or populate in a
#     different way. If we unpacked every captured bit once, and populated
#     staging as a tree of symlinks to those, that'd speed things up (see the
#     sketch below this comment block).
#
# Add to BitBake TODO:
# - We need something like the 'check' flag, but higher up, at the stamp
#   checking level. Without this, there's no good way to force a rebuild if a
#   recipe's archive vanishes. Ideally, we'd be able to consider the build
#   tasks as needing execution if the archive does not exist.
#
# NOTE: We may want to capture task input/output via a git repository or
# repositories rather than tarballs, or use packages instead of tarballs, or
# at least manually add file conflict checking.
#
# Do we have a way to flag a task such that it runs always, like nostamp, but
# whose execution doesn't result in re-executing everything that depends upon
# it? Such a thing may be ideal for prep work.
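
# A minimal sketch (not wired in) of the symlink-tree idea above: unpack each
# archive once into a shared cache, then populate staging as a tree of
# symlinks into that cache. The stage_archive_symlinked name and the cachedir
# parameter are hypothetical.
def stage_archive_symlinked(archivepath, cachedir, stagingdir):
    import os
    from tarfile import TarFile
    from os.path import join, exists, basename, relpath
    from bb import mkdirhier

    # Unpack once, into a per-archive cache directory
    unpacked = join(cachedir, basename(archivepath))
    if not exists(unpacked):
        archive = TarFile.gzopen(archivepath)
        archive.extractall(unpacked)
        archive.close()

    # Mirror the unpacked tree into staging as directories of symlinks
    for root, dirs, files in os.walk(unpacked):
        destroot = join(stagingdir, relpath(root, unpacked))
        mkdirhier(destroot)
        for filename in files:
            dest = join(destroot, filename)
            if not exists(dest):
                os.symlink(join(root, filename), dest)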
def get_deps(d):
    return d.getVar("DEPENDS", True).split()

def get_rdeps(d):
    rdeps = (d.getVar("RDEPENDS", True) or "").split()
    # Include the per-package runtime dependencies as well
    for pkg in (d.getVar("PACKAGES", True) or "").split():
        pkg_rdeps = d.getVar("RDEPENDS_%s" % pkg, True)
        if pkg_rdeps:
            rdeps.extend(pkg_rdeps.split())
    return rdeps
def __get_fns(cachedata, providers, rproviders, d):
    from bb.taskdata import TaskData

    taskdata = TaskData(True, False)
    for provider in providers:
        taskdata.add_provider(d, cachedata, provider)
    for rprovider in rproviders:
        taskdata.add_rprovider(d, cachedata, rprovider)

    fns = []
    for target in taskdata.build_targets:
        fnid = taskdata.build_targets[target][0] # first provider
        fns.append(taskdata.fn_index[fnid])
    for target in taskdata.run_targets:
        fnid = taskdata.run_targets[target][0] # first provider
        fns.append(taskdata.fn_index[fnid])
    return fns
def get_depfns(cachedata, d):
    from os.path import isabs
    from itertools import chain
    from bb import which
    from bb.utils import explode_deps
    from bb.data import inherits_class as inherits

    fn = d.getVar("FILE", True)
    if not isabs(fn):
        fn = which(d.getVar("BBPATH", True), fn)

    if inherits("native", d) or inherits("cross", d):
        runtime_pkgs = chain(cachedata.rundeps[fn].iteritems(),
                             cachedata.runrecs[fn].iteritems())
        exploded = (explode_deps(depstr) for (package, depstr) in runtime_pkgs)
        packages = d.getVar("PACKAGES", True).split()
        rdepends = (rdep for rdep in chain(*exploded) if rdep not in packages)
    else:
        rdepends = ()
    return __get_fns(cachedata, cachedata.deps[fn], rdepends, d)
def get_taskdeps(task, d):
    from bb.data import expand

    taskdeps = expand(d.getVarFlag(task, "depends") or "", d)
    splitdeps = (dep.split(":") for dep in taskdeps.split())
    specificdeps = (provider for (provider, deptask) in splitdeps
                    if deptask == "do_capture")
    return set(specificdeps)
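
# e.g. with do_foo[depends] = "zlib:do_capture quilt-native:do_configure",
# get_taskdeps("do_foo", d) returns set(["zlib"]).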
def get_stagingident(d):
    from bb.data import expand, inherits_class

    ident = "${PF}-${MULTIMACH_HOST_SYS}"
    if inherits_class("cross", d):
        ident += ".${TARGET_SYS}"
    return expand(ident, d)
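
# Illustrative only: the ident is something like "zlib-1.2.3-r0-i686-linux"
# (${PF}-${MULTIMACH_HOST_SYS}), with ".${TARGET_SYS}" appended for cross
# recipes since their output is specific to the target as well.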
def gen_archivepath(d):
    """ Given a datastore, return the archive path. """
    from os.path import join
    return join(d.getVar("ARCHIVE_DIR", True), "%s.tar.gz" % get_stagingident(d))

def get_archivepath(fn, d):
    """ Given a filename, return the archive path. """
    from bb.persist_data import PersistData
    persist = PersistData(d)
    persist.addDomain("staging_archivepath")
    return persist.getValue("staging_archivepath", fn)

def store_archivepath(fn, d):
    from bb.persist_data import PersistData
    persist = PersistData(d)
    persist.addDomain("staging_archivepath")
    persist.setValue("staging_archivepath", fn, gen_archivepath(d))

def get_staged_archives(d):
    from bb.persist_data import PersistData
    stagingdir = d.getVar("STAGING_DIR", True)
    persist = PersistData(d)
    persist.addDomain("staging_populated")
    return (persist.getValue("staging_populated", stagingdir) or "").split()

def store_staged_archives(archives, d):
    from bb.persist_data import PersistData
    stagingdir = d.getVar("STAGING_DIR", True)
    persist = PersistData(d)
    persist.addDomain("staging_populated")
    persist.setValue("staging_populated", stagingdir, " ".join(archives))
def stage_archives(fns, d):
    from subprocess import call
    from tarfile import TarFile
    from os import unlink
    from os.path import join, exists
    from fcntl import lockf, LOCK_EX, LOCK_UN
    from bb import note, mkdirhier

    stagingdir = d.getVar("STAGING_DIR", True)
    mkdirhier(stagingdir)

    # Serialize population of the staging area across concurrent tasks
    lockfilename = join(stagingdir, ".lock")
    if not exists(lockfilename):
        open(lockfilename, "w").close()
    lockfile = open(lockfilename, "r+")
    lockfd = lockfile.fileno()
    lockf(lockfd, LOCK_EX)

    staged = get_staged_archives(d)
    tostage = [fn for fn in fns if fn not in staged]
    if tostage:
        for fn in tostage:
            archivepath = get_archivepath(fn, d)
            archive = TarFile.gzopen(archivepath)
            archive.extractall(stagingdir)
            archive.close()

            # Run any mangle script captured with the archive
            mangle = join(stagingdir, "mangle.sh")
            if exists(mangle):
                note("Running %s" % mangle)
                call(["sh", mangle])
                unlink(mangle)
        store_staged_archives(staged + tostage, d)

    lockf(lockfd, LOCK_UN)
    lockfile.close()
# Stage bits individual tasks need
python task_depstaging() {
    from bb.build import TaskStarted

    if isinstance(e, TaskStarted):
        cooker = e.data.getVar("__COOKER", False)
        deps = get_taskdeps(e.task, e.data)
        if deps:
            fns = __get_fns(cooker.status, deps, (), e.data)
            stage_archives(fns, e.data)
}
addhandler task_depstaging

# Stage deps needed to build
python do_depstaging () {
    cooker = d.getVar("__COOKER", False)
    fns = get_depfns(cooker.status, d)
    if fns:
        stage_archives(fns, d)
}
do_depstaging[deptask] = "do_capture"
addtask depstaging before do_configure
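
# Net effect, roughly: each dependency runs do_compile -> do_capture to
# produce its archive; this recipe's do_depstaging then unpacks those
# archives into its private ${STAGING_DIR} before do_configure, and the
# task_depstaging handler does the same for any task declaring a direct
# ":do_capture" dependency.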
def get_recipe_type(d):
    from bb.data import inherits_class as inherits
    if inherits("cross", d):
        return "cross"
    elif inherits("sdk", d):
        return "sdk"
    elif inherits("native", d):
        return "native"
    else:
        return "target"

TYPE = "${@get_recipe_type(d)}"
BASE_D = "${WORKDIR}/output"
D = "${BASE_D}/${TYPE}"
def captureinstall(archivename, d):
    """ Perform an install and capture the output into an archive. """
    import bb
    from tarfile import TarFile
    from os import curdir, chdir
    from os.path import join

    installdest = d.getVar("BASE_D", True)

    # Do the install
    bb.build.exec_func("do_install", d)

    # Write the mangle script before archiving, so it is captured into the
    # archive and can be run by stage_archives() after extraction.
    mangle = d.getVar("staging_mangle", True)
    if mangle:
        manglefn = join(installdest, "mangle.sh")
        manglescript = open(manglefn, "w")
        manglescript.write(mangle)
        manglescript.close()

    # Capture the output
    chdir(installdest)
    archive = TarFile.gzopen(archivename, mode="w")
    archive.add(curdir)
    return archive
python do_capture () {
    import bb
    from os.path import dirname, realpath
    from shutil import rmtree

    try:
        archivepath = gen_archivepath(d)
        bb.mkdirhier(dirname(archivepath))
        archive = captureinstall(archivepath, d)
    except Exception, e:
        raise bb.build.FuncFailed("Failed to archive the install: %s" % e)
    else:
        archive.close()

    # We won't be needing what's currently in the workdir. Subsequent tasks
    # may create it again for their own work, but we're done with it.
    # Don't want to inadvertently wipe sources that weren't "unpacked".
    sourcedir = d.getVar("S", True)
    workdir = d.getVar("WORKDIR", True)
    topdir = d.getVar("TOPDIR", True)
    if realpath(sourcedir) not in (realpath(workdir), realpath(topdir)):
        rmtree(sourcedir, ignore_errors=True)

    store_staged_archives((), d)
}
addtask capture after do_compile

do_captureall () {
    :
}
do_captureall[nostamp] = "1"
do_captureall[recrdeptask] = "do_capture"
addtask captureall
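
# With [recrdeptask] = "do_capture", running "bitbake <target> -c captureall"
# should produce the capture archives for the target and, recursively, for
# everything it depends on.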
python do_clean_append () {
    from os import unlink
    from os.path import exists

    store_staged_archives((), d)
    archivepath = gen_archivepath(d)
    if exists(archivepath):
        unlink(archivepath)
}
python () {
    from os.path import exists, isabs
    from bb import which

    # Store fn->archivepath mapping
    fn = d.getVar("FILE", True)
    if not isabs(fn):
        fn = which(d.getVar("BBPATH", True), fn)
    store_archivepath(fn, d)

    # Set up tasks to only be executed when the archive does not exist.
    # The check function lives in __builtins__ so it can be resolved by name
    # when the task's 'check' flag is evaluated.
    __builtins__["check_archiveexists"] = lambda func, deps: exists(gen_archivepath(d))
    set = __builtins__["set"]

    def __rec_addcheck(task, d, seen):
        if task in seen:
            return
        for dep in (d.getVarFlag(task, "deps") or ()):
            __rec_addcheck(dep, d, seen)
        if task != "do_setscene":
            d.setVarFlag(task, "check", "check_archiveexists")
        seen.add(task)

    def rec_addcheck(task, d):
        __rec_addcheck(task, d, set())

    rec_addcheck("do_capture", d)
}
ARCHIVE_DIR = "${TMPDIR}/archives"
STAGING_DIR = "${WORKDIR}/staging"
STAGING_BINDIR_CROSS = "${STAGING_DIR}/cross${layout_bindir}"
STAGING_DIR_NATIVE = "${STAGING_DIR}/native"
STAGING_DIR_HOST = "${STAGING_DIR}/${TYPE}"
STAGING_DIR_TARGET = "${STAGING_DIR}/target"
STAGING_DIR_SDK = "${STAGING_DIR}/cross"
INHERIT += "staging"
INHERIT += "staging-hacks"