IRC logs for #buildstream for Wednesday, 2017-06-14

02:11 *** juergbi has quit IRC
02:11 *** juergbi has joined #buildstream
05:38 *** tristan has joined #buildstream
06:51 *** ChanServ sets mode: +o tristan
07:03 *** jude has joined #buildstream
08:22 *** tristanmaat has joined #buildstream
08:24 *** tristanmaat has quit IRC
08:25 *** tristanmaat has joined #buildstream
08:25 *** jonathanmaw has joined #buildstream
08:26 *** tristanmaat has quit IRC
08:26 *** tristanmaat has joined #buildstream
08:46 *** tiagogomes has quit IRC
08:46 *** tiagogomes has joined #buildstream
09:01 <tristan> jonathanmaw, you have been building the build-gnome branch, right?
09:01 <tristan> jonathanmaw, with your MR applied, which I forgot to apply until now?
09:02 <jonathanmaw> tristan: yep
09:02 *** ssam2 has joined #buildstream
09:02 <tristan> jonathanmaw, I'm having trouble with stage2-make
09:02 <tristan> it's telling me it wants aclocal-1.14 and it doesnt exist
09:02 <tristan> while clearly, aclocal-1.15 is *right there*
09:03 <tristan> maybe it's my changes...
09:04 <ssam2> this might not be relevant, but stage2-make is probably building a tarball committed to git
09:04 <juergbi> tristan: did you manage to get artifact sharing working with my instructions? any feedback from your side?
09:04 <ssam2> actually might be a timestamps issue
09:04 <tristan> juergbi, unfortunately... it's been a bit of a messy week and I didnt get there :'(
09:04 <tristan> ssam2, yeah I was thinking that; I have a thing which sets the mtime across the board already, but maybe I fudged it?
09:05 <tristan> anyway lemme investigate
09:05 <juergbi> ok, no problem
09:06 <tristan> juergbi, last week was a mess of ostree investigations
09:07 <tristan> juergbi, which concluded in my implementation of a 'copy-on-write' hardlink fuse layer, and a stronger game plan for fakeroot-over-fuse: https://gitlab.com/BuildStream/buildstream/issues/38
09:08 <tristan> the last piece of the puzzle (ostree-wise) is https://github.com/ostreedev/ostree/issues/925
09:08 <juergbi> ok, taking a look
09:08 <tristan> Which Colin inappropriately renamed; this has nothing to do with rofiles-fuse
09:10 <ssam2> I guess Colin is still thinking that everyone should switch to read-only /usr
09:12 <tristan> ssam2, I dont think so... I think he just missed the point
09:13 <tristan> in the fifth comment (since linking to github issue comments is not as easy as bugzilla somehow), he notes:
09:13 <tristan> "That type of thing is definitely part of the original idea; it's really the reason why OstreeRepo and OstreeSysroot are distinct layers."
09:13 <tristan> So I think this falls into the mission of ostree
09:13 <tristan> well enough
09:14 <tristan> Ok, verified I screwed it up; if I back up in time to pre-copy-on-write hardlinks, stage2-make builds
09:14 <tristan> So I have to think about how I fudged it up
09:18 <tristan> Ah, that was fast :)
09:21 <gitlab-br-bot> push on buildstream@master (by Tristan Van Berkom): 1 commit (last: element.py: Fix regression with setting deterministic mtime on source stages) https://gitlab.com/BuildStream/buildstream/commit/82523346be14496b1aa10f9d45ddced34b53233c
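The deterministic-mtime fix referenced in that commit boils down to clamping every staged file's timestamp to one fixed value, so that identical sources always produce identical build inputs. A minimal sketch of the idea; the function name and the epoch value of 0 are assumptions for illustration, not BuildStream's actual implementation:

```python
import os

FIXED_MTIME = 0  # hypothetical fixed epoch; any constant works as long as it never varies


def set_deterministic_mtime(directory, mtime=FIXED_MTIME):
    """Recursively clamp the atime/mtime of every file and directory.

    Walk bottom-up so that touching files does not bump the mtime of an
    already-clamped parent directory afterwards.
    """
    for root, dirs, files in os.walk(directory, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            # Skip symlinks: utime on the link target could escape the stage
            if not os.path.islink(path):
                os.utime(path, (mtime, mtime))
        os.utime(root, (mtime, mtime))
```

Missing any file in the walk (the bug alluded to above) is enough to make a "deterministic" stage differ between runs.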
09:44 <tristan> juergbi, if you're interested, there was also a thread here https://mail.gnome.org/archives/ostree-list/2017-June/msg00011.html, but I think it's largely beside the point now that I've figured things out a bit more
09:45 <tristan> juergbi, so we went through a few stages with ostree... the first was assigning all files uid/gid 0 and user mode checkouts
09:45 <tristan> juergbi, *but* there were a couple of bugs which got fixed since in ostree; one of them included executable bits being completely lost, *unless* the special uncompressed-object-cache was explicitly in use at checkout time (which wasnt obvious)
09:46 <tristan> that was a regression we didnt notice, but it caused the gitlab runners to malfunction (and anyone with a bleeding edge ostree)
09:47 <tristan> juergbi, then... I needed to hack around the hardlinks getting corrupted after doing the GNOME auto-conversions, because we need to run a mutation script which does `dpkg --configure -a` on an imported debian sysroot
09:47 <tristan> So I hacked around that and made the artifact cache use archive-z2, which temporarily swept that under the rug
09:48 <tristan> In the meantime, there was another ostree bug, which was (since forever): user mode checkouts are always "(mode | 0755) &= ~(suid|gid)"
09:48 <tristan> s/gid/sgid
09:50 <tristan> So now, after talking about this on the above thread, we agreed that if we cannot have arbitrary uid/gid, then it's pointless to argue for an insecure case of allowing the build user to own an suid/sgid file; but that bug has been fixed so that user mode checkouts are always "mode &= ~(suid|sgid)"
09:50 <tristan> After this, I did the copy-on-write fuse layer, and reverted the artifact cache back to its initial form
09:50 <tristan> So
09:50 <tristan>   o Files always committed uid/gid 0
09:51 <tristan>   o Files always checked out with original permissions, build user ownership, and no suid/sgid
09:51 <tristan>   o Files written to in the staging area become copies automatically, no artifact cache corruption
09:51 <tristan> juergbi, that is the present state of the artifact cache
09:52 <juergbi> ok, thanks for the summary
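The permission-bit change tristan describes is easy to state in code: the old behaviour forced `0755` into every mode before stripping setuid/setgid, while the fixed behaviour only strips setuid/setgid. A sketch of both, purely to make the two mode formulas above concrete (these helper names are not ostree's API):

```python
import stat


def user_checkout_mode(mode):
    """Fixed ostree behaviour for user-mode checkouts:
    strip only setuid/setgid, keep the original permission bits."""
    return mode & ~(stat.S_ISUID | stat.S_ISGID)


def old_buggy_mode(mode):
    """Old behaviour: (mode | 0755) &= ~(suid|sgid),
    which silently widened the permissions of every checked-out file."""
    return (mode | 0o755) & ~(stat.S_ISUID | stat.S_ISGID)
```

With the old formula, a `0600` private key file came out of checkout as `0755`; the fixed formula leaves it at `0600`.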
09:52 <tristan> Addressing issue #38 (which I filed like, yesterday)... will do fakeroot-over-fuse and cooperate with the artifact cache, and allow everything you ever wanted (xattrs, arbitrary UID/GIDs, SUID) in the build sandbox and build outputs
09:53 <tristan> Also it can guarantee that the user never actually owns an suid 'sudo' anywhere
09:53 <tristan> but for now, without that we can still produce relatively sane system images
09:54 <tristan> they just dont have arbitrary uid/gid ownerships, and lack suid things
09:54 <tristan> (or xattrs, but those are fairly rare too)
09:54 <tristan> Anyway, as you can see, it was a wild week :)
09:54 <juergbi> :)
09:55 <tristan> juergbi, regarding the artifact cache sharing, which I have not had time to look at unfortunately... I have some things I'm curious about :)
09:56 <tristan> A.) When a build can be downloaded, are we able to eliminate its build dependencies from the pipeline?
09:56 <tristan> B.) Even more fun... can we do it in `bst build --track` mode?
09:56 <tristan> hehehe
10:00 <juergbi> A) not right now; it might be possible once/if we use a summary file where we could check availability early on (although this could be problematic in case the connection to the artifact server fails in the middle of a bst run)
10:01 <tristan> This is very interesting actually. In --track mode it means... I have not seen this commit sha of systemd before, but someone else may have already built it... after getting the latest sha, the element moves on to the artifact pull queue... but once it gets there, can we say: we already have a systemd artifact available, so there is no need to process, at all, any elements which are build-only dependencies for it (as long as they are not referenced by anything else we do need to build)
10:02 <tristan> juergbi, also, I have a feeling that we can query the ostree for what refs it has without using a summary file... but I might be wrong
10:02 <tristan> I think a very acceptable approach though, is to do that query once only at the beginning of the build
10:03 <tristan> (like, lets not care that someone built it before us, in the middle of a build, or at least lets not try to recalculate build plans in that case)
10:04 <juergbi> via sshfs we could query using normal filesystem operations without a summary file; however, that might be one (remote) check for each artifact (given they are all in different directories, unless SFTP supports something like recursive listings)
10:06 <juergbi> for http-only servers, one HEAD request per artifact could work
10:06 <juergbi> pipelining should theoretically be possible in the HTTP case
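juergbi's one-HEAD-per-artifact idea could look roughly like the sketch below. The `project/element/key` ref layout and URL scheme are pure assumptions for illustration; only the mechanism (a HEAD request, treating 200 as "available") is what was proposed:

```python
import urllib.error
import urllib.request


def artifact_ref(project, element, key):
    """Build a ref name for an artifact. This layout is a guess, not
    BuildStream's actual artifact naming scheme."""
    return "{}/{}/{}".format(project, element, key)


def artifact_available(base_url, ref, timeout=5):
    """Issue one HTTP HEAD per artifact; pipelining/batching left aside."""
    req = urllib.request.Request(base_url.rstrip("/") + "/" + ref, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.getcode() == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        # 404 means "not built yet"; network failure is treated the same
        # here, though a real client would want to distinguish the two
        return False
```

Doing this once per element at the start of a build (as tristan suggests below) keeps it to one round-trip per artifact rather than a continuous availability poll.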
10:14 <tristan> juergbi, hmmm one sec...
10:17 <tristan> juergbi, :)
10:17 <tristan> This command over http gave me a list:
10:17 <tristan> ostree remote refs --repo ~/.cache/buildstream/sources/ostree/https___sdk_gnome_org_repo_/ origin
10:18 <tristan> juergbi, so assuming that the artifact cache is set up to have a remote configured for the artifact share, some ostree API can give us that, and I dont think it requires a summary file be present on the remote
10:21 <juergbi> tristan: have you confirmed that this list actually includes all remote refs and not just the ones that have been pulled?
10:21 <juergbi> it looks like a local command to me
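For reference, wrapping that command could be as simple as the sketch below. The `remote:refname` line format is an assumption about `ostree remote refs` output, and juergbi's caveat stands: whether the list includes refs that were never pulled still needs checking.

```python
import subprocess


def parse_remote_refs(output):
    """Parse `ostree remote refs` output into a set of ref names.

    Assumes lines like 'origin:runtime/org.gnome.Sdk/x86_64/3.24';
    a bare ref without a remote prefix is also accepted.
    """
    refs = set()
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        # Drop a leading 'remote:' prefix if present
        refs.add(line.split(":", 1)[-1])
    return refs


def remote_refs(repo, remote):
    """Run the command tristan pasted above and return the parsed refs."""
    out = subprocess.check_output(
        ["ostree", "remote", "refs", "--repo", repo, remote], text=True)
    return parse_remote_refs(out)
```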
10:22 <ssam2> tristanmaat, there are a couple of ways of making the size of the bootstrap tarball more manageable
10:23 <ssam2> tristanmaat, one is, we don't actually need all the components that are used in a normal build. The Baserock definitions have a separate stratum named 'cross-bootstrap' (maybe not the best name) which contains just enough to run Morph: http://git.baserock.org/cgit/baserock/baserock/definitions.git/tree/strata/cross-bootstrap.morph
10:24 <ssam2> tristanmaat, I think we'll need something similar here. But I think you should focus on getting the command working first
10:24 <ssam2> tristanmaat, so I suggest writing a testcase that just tries to produce a source bundle for a couple of small components (let's say fhs-dirs and busybox perhaps ...)
10:25 <ssam2> tristanmaat, make the `source-bundle` command produce a bundle that you can build on your laptop
10:25 <tristanmaat> Ok, yeah, that sounds reasonable.
10:25 <juergbi> tristan: it lists all refs that are locally stored in refs/remotes. however, I have to check how that directory is updated
10:25 <juergbi> i suspect it's only for actually pulled refs
10:25 <tristanmaat> Should I merge changes to the main repo into it or continue on your private copy? Or just restart on the main repo?
10:25 <tristanmaat> *from the main repo
10:26 <tristan> juergbi, oh... sure about that? strange
10:26 <ssam2> tristanmaat, consider it a github style workflow. So make your own personal fork to work in
10:26 <ssam2> tristanmaat, when it's done you can send a merge request to the main repo
10:27 <juergbi> tristan: i'll have to do some more digging
10:27 <ssam2> tristanmaat, and start from my changes if you like, but you may as well pull them into your fork. Makes more sense than me giving you access to my personal fork.
10:30 <tristan> Ok so ssam2, tristanmaat... what's going on here? I'm a bit lost; are we going to put the parse validation stuff aside for now?
10:31 <tristan> That is fine with me fwiw
10:32 <ssam2> i don't mind either way. Probably better to move onto the source-bundle work if tristanmaat feels ready though
10:32 <tristanmaat> Yeah, we are. I'm starting to work on the tar bundle command, until either I finish or this project is more urgent.
10:32 <tristanmaat> Laurence told me to, so I'll just do that.
10:34 <tristan> tristanmaat, yeah ok that is fine
10:34 <tristan> But I want to know what's going on before something random comes in the form of an MR, for this source bundle thing
10:35 <ssam2> in how much detail ?
10:36 <tristan> ssam2, tristanmaat so what is the output of this, first of all? Is it a directory with staged source code in many element-named subdirectories, plus a main script and many element-specific scripts?
10:36 <ssam2> my thought is that yeah, it should generate exactly that
10:36 <tristan> ssam2, in enough detail that I have an idea what's going on, because I need to know it makes sense before I start making demands for expensive changes on your team's behalf
10:37 <ssam2> sure... my issue here is that I find prototyping to be the only way of working out how things should work
10:37 <tristan> So for the Sources part, this can be done with current APIs
10:37 <tristan> ssam2, except you have me available and I have a really good idea of what can go wrong :)
10:37 <tristan> so I will save you some time
10:38 * tristan is building an image so computer is intermittently locking up...
10:39 <tristanmaat> tristan, I'll be working on this particular command mostly
10:39 <ssam2> i did an initial hack here: https://gitlab.com/samthursfield/buildstream/tree/sam/source-bundle
10:39 <ssam2> which indeed, is able to stage the sources without much hassle
10:40 <tristan> Ok so... the command will have to run a pipeline in the case that the sources need to be tracked or fetched
10:40 <tristan> That can be determined before trying to stage the sources
10:40 <tristan> so only optionally
10:40 <tristan> But the tricky part is going to be serializing scripts
10:40 <ssam2> right. is track useful here? I was wondering if we could ignore it, since this is just for bootstrapping
10:41 <tristan> ssam2, if we have it for build, we should have it there for consistency; however it should not take more than an hour really, assuming everything else is done
10:41 <ssam2> ok
10:41 <tristan> I.e. we already have it for build, the code is there, it's just a matter of constructing the scheduler queues with Track in advance of Fetch
10:42 <ssam2> and doing twice as much testing
10:42 <tristan> Also, note that this is certainly *not* just for bootstrapping
10:42 <ssam2> ah, ok
10:42 <ssam2> what are the other use cases?
10:42 <tristan> In fact, this is only for *after* bootstrapping, and before the target host is capable of running buildstream
10:42 <tristan> this is the low level middleware
10:42 <tristan> Right ?
10:42 <tristan> It *can* contain a bootstrap
10:43 <ssam2> hoping to avoid another debate about the meaning of "bootstrap", so OK
10:43 <tristan> but the point is to be able to create a script bundle which can build all of the buildstream dependencies
10:43 <ssam2> it's for producing a system that can run BuildStream in the absence of existing native host tools
10:43 <ssam2> and a cross sandbox
10:43 <tristan> Correct. ok so there is one "gotcha" I can see
10:44 <tristan> The target system which will run this will of course need to have some base tools, and need to have chroot I guess
10:44 <ssam2> I think busybox can provide that if needed
10:44 <tristan> So I *guess* that every build on that target happens basically like in BuildStream, but as root in a chroot ?
10:44 <tristan> Or, do you install everything to /usr ?
10:44 <tristan> Yeah that's not the gotcha
10:45 <tristan> The main gotcha: Not every buildstream element can support this
10:45 <ssam2> in Morph, it seems what happens is that it runs the normal install-commands
10:45 <tristan> ssam2, So probably at the beginning of running this `bst source-bundle` command, we need to check the elements in the pipeline for support
10:45 <ssam2> right
10:45 <tristan> ssam2, in BuildStream, not every element is a BuildElement
10:46 <ssam2> ImportElement in particular is not wanted, and should actually cause an error since it'd be trying to import host tools that don't exist
10:46 <tristan> ssam2, which also means you need some sort of special semantic for eliminating parts of the pipeline you load
10:46 <tristan> Maybe something like --start-from
10:47 <tristan> Anything below --start-from gets cut off of the pipeline ?
10:47 <ssam2> hmm, that could work
10:47 <ssam2> I was thinking we'd need a separate stack just for cross-bootstrap. Since we also need to avoid having 8 copies of gcc.git if possible
10:47 <ssam2> but, --start-from would allow that
10:47 <ssam2> and it'd be very nice to avoid having a duplicate stack
10:47 <ssam2> so yeah I like that
10:47 <tristan> Not sure it does for every pipeline; we could have multiple imports for various build-only dependencies, so --start-from might have to be something like --sever
10:48 <tristan> And it can be repeated... bst source-bundle --sever elementa.bst --sever elementb.bst
10:48 <ssam2> ah right, because for example an element for Rust might try to pull in a binary blob of a precompiled Rust compiler
10:49 <tristan> Something like that could happen
10:49 <ssam2> OK
10:49 <tristan> It's only a matter of fitting it into how buildstream works
10:49 <tristan> since it can happen, better to allow severing of multiple dependency chains
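The `--sever` idea discussed here amounts, hypothetically, to pruning the loaded dependency graph: walk from the bundle's targets and stop descending whenever a severed element is reached. A minimal sketch (whether a severed element itself stays in the bundle as a placeholder is an open design question; here it is dropped entirely):

```python
def prune_pipeline(deps, targets, severed):
    """Return the set of elements reachable from `targets` without
    descending into any element named in `severed`.

    `deps` maps element name -> list of dependency names.
    This models `bst source-bundle --sever a.bst --sever b.bst`
    as a plain graph walk; it is not BuildStream's loader.
    """
    keep = set()
    stack = list(targets)
    while stack:
        element = stack.pop()
        if element in keep or element in severed:
            continue  # already visited, or cut off here
        keep.add(element)
        stack.extend(deps.get(element, []))
    return keep
```

Because severing is a set, repeating the option simply cuts several chains at once, matching the multiple-imports case tristan raises.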
10:50 <tristan> So probably, this is only supported by BuildElement and Stack
10:50 <tristan> And the base generated script has a way of running these
10:50 <tristan> ssam2, anyway, I think that's enough... just wanted to stick my nose in and have an idea of what's going on
10:50 <tristan> I'm satisfied :)
10:51 <ssam2> it's been helpful :-)
10:54 <tristan> ssam2, not sure how morph did it, but probably by substituting the install-root to '/', you could have a serialized build model which installs everything directly into the same chroot dir
10:54 <ssam2> seems what Morph did was install to /%{name}.inst
10:54 <tristan> then maybe you could move a symlink from /buildstream/build -> real build in between builds or such
10:54 <ssam2> then run `(cd /%{name}.inst; find . | cpio -umdp /)`
10:55 <ssam2> ugly, but I guess it worked
10:55 <tristan> myeah
10:55 <ssam2> it didn't use a chroot at all
10:55 <tristan> So it builds directly onto the slash
10:55 <ssam2> yeah
10:55 <tristan> yeah that can also be fine for this purpose I guess
10:56 <tristan> still I would recommend just setting install-root to / when doing substitutions on build elements
10:56 <ssam2> yeah, same effect but simpler
10:57 <tristan> the only problem I could see is badly written buildscripts choking on DESTDIR=/
10:57 <tristan> but I doubt it
10:58 <tristan> ssam2, regarding the chroot... I *think* that actually, if someone wanted to try it... they could always manually put build-essential into a directory, chroot into that, and run the source-bundle output directly in there
10:58 <tristan> so I guess installing directly to / is interesting in that light also
10:59 <ssam2> for testing, you mean ?
10:59 <tristan> Yeah
10:59 <tristan> Also, I *think* you want to sever build-essential (or 'gnu-toolchain') from the source-bundle anyway
11:00 <ssam2> very much so
11:00 <tristan> I.e. you will always need a kernel and a cross-built runtime placed onto the host anyway
11:00 <tristan> I guess you are working on that part while tristanmaat looks at the other
11:00 <ssam2> yeah, I'm looking at cross-building a suitable sysroot
11:01 <ssam2> which i'll want to discuss... but probably tomorrow
11:01 <tristan> ssam2, so for testing (before cross bootstrap)... tristanmaat can easily test with `bst checkout gnu-toolchain.bst`
11:01 <tristan> and chroot into the checkout
11:01 <tristanmaat> And in there run the tarball
11:01 <tristanmaat> Cool
11:01 <tristan> right
11:02 <tristan> the magic self-extracting executable tarball :)
11:02 <tristan> (joking)
11:02 <tristan> (but doable)
11:31 <ssam2> i'm struggling to reconcile architecture conditionals with cross compile support
11:32 <ssam2> there are two places in the converted Baserock definitions that use architecture conditionals
11:32 <ssam2> in the project.conf it sets the compiler target for stage1 and host architecture for stage2, so the 'arches' conditional should follow the cross architecture (target) if set
11:33 <ssam2> but in the base-sdk and base-platform elements, it's used to choose which ref (and which architecture) we pull, and that should always follow the native architecture even when cross compiling
11:34 <ssam2> we could introduce a new 'cross-arches' or 'native-arches' conditional, but that seems confusing and a bit ugly
11:34 <ssam2> or we could decide how the 'arches' conditional works based on what type of element we're dealing with, since ImportElement is never going to really have a context of 'cross compiling'. But it seems that _loader.resolve_arch() is called before we know what type of element we're dealing with
11:39 <ssam2> I guess we do know what kind of element we're dealing with by looking at the 'kind' field
11:40 <tristan> ssam2, I was having that conundrum when we spoke earlier
11:40 <tristan> ssam2, my favorite approach I think, is that there is --host-arch and there is --build-arch, and they both default to --arch
11:41 <tristan> ssam2, however it leaves the question, what is --arch in the case that host and target differ
11:41 <tristan> sorry, s/--build-arch/--target-arch/
11:41 <ssam2> sure, if 'arches' follows '--arch' there seems to be no solution that would work
11:41 <ssam2> unless we decide based on element type
11:41 <tristan> no ?
11:42 <ssam2> like I said above, base-sdk and base-platform need --arch to be --host-arch in all cases
11:42 <tristan> Well
11:42 <ssam2> but stage1 and stage2 need --arch to be --target-arch when cross building, and --host-arch when native building
11:42 <tristan> ssam2, I think this may just depend on how you set up your pipeline
11:43 <tristan> I.e. we depend on base-sdk and base-platform to build natively; I think it makes sense that those are the target
11:43 <ssam2> I don't understand what you mean...
11:43 <tristan> ssam2, i.e. in the case that we wanted to do a virtual cross build, the sandbox would have to depend on a host emulator for that target arch
11:44 <ssam2> right, but that's a whole extra codepath that's just not present yet, right?
11:44 <tristan> ssam2, Ok so, you are talking about base-sdk and base-platform like those elements are set in stone, but we depend on them for something specific
11:44 <ssam2> yes
11:44 <tristan> ssam2, yes, but I want to keep that codepath in mind all the way through
11:44 <tristan> otherwise we wont have something coherent for it
11:44 <ssam2> ok, so you think the host tools should be treated specially somehow?
11:44 <ssam2> that could make sense
11:45 <tristan> I mean that when building things, everything ("arches") is --target
11:45 <ssam2> that's fine apart from ImportElement
11:45 <tristan> So in a shiny future, we will need to depend on the same base runtimes to be staged in a qemu sandbox for cross target building
11:46 <tristan> However I think you are confusing ImportElement and what base-sdk/base-platform *is*; it's just an element we depend on for that project
11:46 <tristan> It does not mean that if you want to cross build, you have to depend on *that* import element
11:46 <ssam2> well, ok
11:47 <ssam2> is there a way that an element can select its dependencies based on whether or not we're cross building ?
11:47 <tristan> If we add "host-arches" and the command line/context options we spoke of, then one could distinguish
11:48 <tristan> I think we already discussed that we *need* to have a different arches conditional in order to do this at all
11:48 <ssam2> ah, I was missing that point
11:48 <ssam2> if we can have two types of 'arches' conditional then there's no issue
11:48 <tristan> ssam2, So, that would mean in a project with a cross-capable bootstrap, we would have that base import depend on host-arches
11:48 <tristan> Exactly, thats certainly necessary
11:49 <ssam2> so 'host-arches' always follows '--host-arch', and 'arches' follows '--target-arch' (which defaults to '--host-arch')
11:49 <tristan> ssam2, so I think we have the desired thing... we have --host-arch and we have --target-arch command line options, and they both default to --arch, which in turn defaults to `uname`
11:50 <ssam2> ok
11:50 <tristan> then exactly as you said, we have host-arches conditionals (new) and the existing arches conditionals (existing = target)
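The scheme the two settle on here ('arches' follows the target arch, 'host-arches' follows the host arch) could be resolved along the following lines. The flat dict-update merge is a simplification for illustration, not what BuildStream's loader actually does:

```python
def resolve_arch_conditionals(node, host_arch, target_arch):
    """Collapse 'arches' and 'host-arches' conditionals in one node.

    'arches' branches are selected by the target architecture,
    'host-arches' branches by the host architecture, per the
    discussion above. Matching branches are merged over the
    unconditional keys (a simplified merge).
    """
    result = {k: v for k, v in node.items()
              if k not in ("arches", "host-arches")}
    for key, arch in (("arches", target_arch), ("host-arches", host_arch)):
        branch = node.get(key, {}).get(arch, {})
        result.update(branch)
    return result
```

In the cross-build example from the log, a base-sdk-style element would carry its ref under 'host-arches' (so the pulled runtime matches the machine doing the building), while stage1/stage2 flags live under 'arches'.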
11:51 <ssam2> as a separate thing, when would '--host-arch' differ from '--arch'?
11:51 <tristan> ssam2, I am struggling as to whether we should bite the bullet and deprecate `arches` for a more consistently named `target-arches`
11:51 <ssam2> i'm not sure on that point either
11:51 <tristan> ssam2, when you are on a mips system, and you only have an x86 runtime to import, in order to produce an arm system.
11:52 <tristan> :)
11:52 <ssam2> OK. But in that case, what purpose does --arch serve?
11:52 <tristan> In that case your sandbox will require emulation for both host and target architectures
11:52 <ssam2> you'd have --arch=mips, --host-arch=x86_64, --target-arch=arm
11:52 <tristan> ssam2, that should produce an error I think, or a warning
11:52 <ssam2> but specifying --arch=mips is pointless since it represents something that can't be changed (the physical machine architecture)
11:53 <tristan> "WARNING: Meaningless option --arch specified"
11:53 <tristan> <tristan> ssam2, so I think; we have the desired thing... we have --host-arch and we have --target-arch command line options, and they both default to --arch, which in turn defaults to `uname`
11:53 <ssam2> right, I understand that part
11:53 <tristan> ssam2, as mentioned above, it's only a convenience
11:53 <ssam2> my question is why bother with --arch if it can only be used to produce error messages
11:53 <tristan> its not
11:54 <tristan> ssam2, the usual case is that you dont need to specify it... the almost-usual case is when you have to specify it because arch names are just not consistent across the board (Just because uname happens to report x86_64 on your 64bit intel does not mean it will produce x86_32 on your actually-i686 machine)
11:55 <tristan> ssam2, So at some point, what the project calls a given "arch" is not the same as what uname produces, so it makes sense to let the user specify one that their project understands
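The uname-versus-project-arch mismatch tristan describes is essentially an alias-table problem: map whatever `uname -m` reports onto the names the project's conditionals use. The table entries below are illustrative guesses, not anything BuildStream or Baserock defines:

```python
import platform

# Hypothetical alias table mapping uname machine names onto the
# arch names a project's 'arches' conditionals might actually use.
ARCH_ALIASES = {
    "x86_64": "x86_64",
    "i386": "x86_32",
    "i486": "x86_32",
    "i586": "x86_32",
    "i686": "x86_32",
}


def default_arch():
    """Default for --arch: uname's machine name, translated through the
    alias table; unknown names pass through untranslated, which is when
    the user would specify --arch explicitly."""
    machine = platform.machine()
    return ARCH_ALIASES.get(machine, machine)
```

This is why `--arch` stays useful even with `--host-arch`/`--target-arch`: it overrides the default both options fall back to when the alias table doesn't know the machine.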
11:56 <ssam2> wouldn't you use `--host-arch=x86_32` if you wanted to build x86_32 on an x86_64 machine though ?
11:56 <tristan> No
11:56 <ssam2> i'm curious how this ties in with the commands other than `bst build`, as well
11:56 <tristan> I'm curious about that too, but I believe the only thing it can affect is cache keys
11:57 <ssam2> i mean, it has use cases. `bst show --arch=armv7l` can be used to show if I have built an armv7l image already
11:57 <ssam2> but that could equally be `bst show --host-arch=armv7l`
11:58 <tristan> ssam2, Ok so, in a world where we do have sandboxes that can do emulation... *both* the host and target arches are not necessarily what your actual computer is running
11:58 <ssam2> yes, I understand that
11:58 <tristan> If you wanted to build x86_32 on an x86_64 machine, you would specify --target-arch=x86_32
11:59 <tristan> But
11:59 <tristan> In the case that uname does not report x86_64 on a 64bit intel, then you would have to specify both
12:02 <ssam2> so, it'd build stage1 where --host-arch=x86_64 and --target-arch=x86_32
12:03 <ssam2> then build a stage2 where --host-arch=x86_32 and --target-arch=x86_32
12:03 <ssam2> but if that's how it works, how does it know to flip --host-arch over to x86_32 for stage2 onwards?
12:06 <tristan> Sigh....
12:07 <tristan> ssam2, for the whole duration of gnu-toolchain, --host-arch is x86_64 and --target-arch is x86_32 (in the said example)
12:07 <ssam2> ok
12:07 <tristan> this has nothing to do with stage2 vs stage2
12:07 <tristan> err stage1 vs stage2
12:07 <ssam2> is the --host-arch the same throughout the build?
12:08 <tristan> gnu-toolchain has to guarantee that everything it built for --target was never run.
12:08 <tristan> ssam2, yes, throughout the whole thing; it's only for gnu-toolchain, and bsp
12:08 <tristan> only things which can cross build
12:08 <ssam2> ok
12:09 <ssam2> the thing that has kept confusing me is the kernel, in fact
12:09 <tristan> Once that output is produced, it was built with the requirement of a host-arch sandbox
12:09 <ssam2> in the cross sandbox world, we need to cross build a kernel, so let's say we build linux.bst with --host-arch=x86_64 and --target-arch=x86_32
12:09 <tristan> and it produced something that can run in a target-arch sandbox
12:10 <tristan> ssam2, So... for now... we dont have to worry too much, but one thing this conversation revealed is that elements or projects which require host or target sandboxes need to be declarative in some way
12:11 <tristan> for the virtualization thing to work
12:11 <tristan> but we need not address that right now
12:12 <tristan> ssam2, it seems to me that projects might be a good place to make this distinction; once we get recursive pipelines we could group things in such a way that a given project requires the host arch, while a depending project requires target arch for staging its dependencies
12:12 <ssam2> so the bootstrap would be a project, and the rest would be another project?
12:12 <tristan> but anyway, it need not be decided right now, until we actually do virtualized sandboxes
12:13 <ssam2> it's useful to consider though
12:13 <ssam2> what keeps setting me backwards is thinking about a theoretical 'linux.bst'
12:13 <tristan> ssam2, that has been my aim since day one, yes; actually I would prefer that a project like Baserock be something like 6 or 7 projects
12:14 <ssam2> let's say linux.bst can be cross-built in the host sandbox in order to bring up a target sandbox, but then can also be native-built in the target sandbox to produce a kernel for the final image
12:14 <ssam2> but if --host-arch and --target-arch differ, it'll always try to cross-compile
12:14 <tristan> I hope that we dont need a kernel for the virtualization and we can get away with better performance in user mode
12:15 <tristan> But yeah, if we need real-kernel full VMs, then it needs to stage a runtime and a kernel
12:16 <tristan> ssam2, so what you are saying is not true if we have a way to declaratively say that "this project needs host-arch sandboxes"
12:16 <tristan> Or "this element"
12:16 <ssam2> that seems the wrong way to look at it
12:16 <ssam2> from the POV of a definitions author, you want a single kernel definition
12:17 <tristan> Rrrrright
12:17 <tristan> And ?
12:17 <ssam2> so it needs a host sandbox or a target sandbox depending on whether it's being native-built as part of an image build, or cross-built as part of bringing up a target sandbox
12:18 <tristan> ssam2, So if you are building natively and not cross building, then host-arch == target-arch, but the kernel is a cross-capable build and thus *requires* a host-arch sandbox
12:18 <ssam2> but there may be two kernels in a single build
12:19 <ssam2> let's say I'm building for an ARM box in an X86 sandbox on MIPS, again
12:19 <ssam2> I need an x86_64 kernel to bring up the x86_64 sandbox
12:19 <tristan> Right
12:19 <ssam2> and then a native-built ARM kernel
12:19 <tristan> Okay, that's a good point
12:19 <tristan> Let's talk about it in 6 months :)
12:19 <tristan> ssam2, or do you think this model will pose insurmountable problems at that stage ?
12:19 <ssam2> not insurmountable problems
12:20 <tristan> I think maybe you have pointed out a need for the same element to arise twice in a pipeline, but formatted differently
12:20 <tristan> What's important is that we have a model that makes sense in the first place
12:20 <ssam2> the simple solution would be to have a 'host-linux' element as part of the bootstrap and a 'linux' element later on
12:21 <ssam2> I guess the host-linux kernel would be special anyway, as the target sandbox needs some way of knowing that "this is my kernel"
12:21 <tristan> That would be the hack, yeah
12:21 <tristan> Another way to look at it is, one project can depend on an arch of another project
12:21 <tristan> And a project can depend on itself
12:21 <tristan> That way we invoke the loader twice
12:22 <ssam2> which would effectively provide a way for host-arch to change during a pipeline ?
12:22 * ssam2 head spinning again
12:23 <ssam2> as long as you're aware of the kernel issues then I feel I can get on with the initial work, anwayy
12:23 <ssam2> *anyway
tristanjonathanmaw, is there a reason you had to do this nested loop: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/scriptelement.py#L204 ?12:23
ssam2one final question: should every command grow `--target-arch` and `--host-arch` ?12:23
tristanjonathanmaw, instead of simply self.search(Scope.BUILD, ...) ?12:24
tristanssam2, yeah I think so... I have been contemplating moving that to the main group (e.g. bst --arch build ...)12:25
tristanbut I think there are some things which dont need it ?12:25
jonathanmawtristan: I think the problem I had was that there was no way to make search no search recursively.12:25
ssam2tristan: given that arch conditionals exist, I think everything does need it12:25
jonathanmawhrm, maybe not12:26
tristanjonathanmaw, in what case do you not want search to search recursively though ?12:27
tristanthat's sort of the point12:27
jonathanmawah, I see. It was because it was only meant to find a Scope.RUN dependency in one of the element's Scope.BUILD dependencies. Does an element's Scope.RUN dependencies automatically include the Scope.RUN dependencies of all its Scope.BUILD dependencies?12:28
tristanmaatIs there a working bootstream project for gnu-toolchain somewhere? The gnu-toolchain branch in bootstream-tests doesn't seem to work :/12:32
ssam2tristanmaat, try branch sam/buildstream of https://gitlab.com/baserock/definitions12:32
ssam2it got discussed a bit on the baserock-dev mailing list: https://listmaster.pepperfish.net/pipermail/baserock-dev-baserock.org/2017-June/thread.html12:33
ssam2which you should subscribe to, by the way -- https://listmaster.pepperfish.net/cgi-bin/mailman/listinfo/baserock-dev-baserock.org12:33
tristanjonathanmaw, let's put it this way: If an element depends on another element for the purpose of building, it is assumed that element needs to run12:33
ssam2tristanmaat: that said, gnu-toolchain from buildstream-tests should work -- what issue are you having ?12:33
tristanjonathanmaw, but aside from the typo in your question (I think), your answer is yes: An element's BUILD dependencies imply the RUN dependencies *of those BUILD dependencies*12:34
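The scope rule tristan describes can be sketched in plain Python. This is a toy model, not BuildStream's real API: an element's BUILD scope is each build dependency together with that dependency's full RUN closure.

```python
# Toy sketch (not the real BuildStream code) of the scope rule discussed
# above: an element's BUILD scope pulls in each build dependency plus
# that dependency's entire runtime (RUN) closure.
class Element:
    def __init__(self, name, build_deps=None, runtime_deps=None):
        self.name = name
        self.build_deps = build_deps or []
        self.runtime_deps = runtime_deps or []

    def run_closure(self, seen=None):
        """Everything needed at runtime: self plus runtime deps, recursively."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []
        seen.add(self.name)
        result = [self]
        for dep in self.runtime_deps:
            result.extend(dep.run_closure(seen))
        return result

    def build_scope(self):
        """Everything staged to build self: each build dep's RUN closure."""
        result, seen = [], set()
        for dep in self.build_deps:
            result.extend(dep.run_closure(seen))
        return result

libc = Element("libc")
gcc = Element("gcc", runtime_deps=[libc])
app = Element("app", build_deps=[gcc])

print([e.name for e in app.build_scope()])  # → ['gcc', 'libc']
```

So depending on gcc for building implicitly stages libc too, which is exactly why the nested loop in scriptelement.py was redundant.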
tristantristanmaat, I may have left that branch stale I dont know, I thought it was building... otherwise try the 'build-gnome' branch12:36
tristantristanmaat, When you say "it doesnt work", do you mean that stage2-make breaks at configure time, desiring the presence of aclocal-1.14 ?12:36
tristantristanmaat, and you have not pulled master in the last 2 hours or so ?12:37
tristan(of buildstream, where I recently corrected that regression ?)12:37
tristanmaatHang on...12:37
tristanmaatNo new commits, so I am up-to-date12:39
tristanmaat bst checkout gnu-toolchain.bst test12:40
tristanmaatLoading:   03012:40
tristanmaatResolving: 030/03012:40
tristanmaatChecking:  030/03012:40
tristanmaat12:40
tristanmaatERROR: Artifact missing for gnu-toolchain/gnu-toolchain/0e4186ada10a17bcee4f98acf7b29e879d3f3b73941b6987c0bdbbb9d1b5ef4d12:40
tristanmaatBuild actually works12:40
tristanmaatCuriously12:40
tristanOh ?12:46
tristanmaat... I am misunderstanding checkout, right? It needs to be built first12:46
tristanmaatDamnit12:46
tristantristanmaat, ok then you probably caught me in the act of regressing something else, and gnu-toolchain branch is not a problem12:46
tristanI need 15 minutes to push this other fix and probably fix that at the same time...12:47
tristanmaatOk it's not priority at the moment anyway12:47
gitlab-br-botpush on buildstream@master (by Tristan Van Berkom): 5 commits (last: sandbox.py: Added 'artifact' keyword argument to mark_directory() API) https://gitlab.com/BuildStream/buildstream/commit/9b32e105b1662781b87b6df21124f5e52097c2fc13:03
tristanOk... jonathanmaw... I fixed those issues and image deployments work again \o/ !13:03
jonathanmaw\o/13:03
tristanAlso, I simplified the script element to remove those complex searches and replaced with self.search(Scope.BUILD)13:03
tristanwhich will just work13:03
tristannow let's see about a checkout regression13:03
tristantristanmaat, first, run `bst show gnu-toolchain.bst`, is the artifact really cached ?13:06
tristantristanmaat, second, I am not seeing that problem, *however*, I have another issue; bst checkout is not doing something I think it should be doing13:06
jonathanmawtristan: that's odd, I just did a test comparing the contents of self.dependencies(Scope.RUN), and dependencies(Scope.RUN) for every Scope.BUILD dependency of the element, and they looked very different. https://pastebin.com/tVgKXxb113:06
tristanjonathanmaw, ...13:08
tristan<tristan> Also, I simplified the script element to remove those complex searches and replaced with self.search(Scope.BUILD)13:08
tristanjonathanmaw, read: self.search(Scope.BUILD...)13:08
tristannot self.search(Scope.RUN)13:08
jonathanmawaha13:08
tristanjonathanmaw, try it again but put Scope.BUILD in the self.search() call13:08
tristanmaattristan: Sorry, had my browser over this... I misunderstood the checkout command, I thought it would build if the artifacts weren't cached.13:09
tristantristanmaat, ahh, yeah it doesnt do that... but still, what you want wont work13:10
tristanmaatWhat is the other issue though?13:10
tristanI'm a bit perplexed13:10
tristanWhat should it do13:10
tristancurrently it checks out an artifact into a directory, which is normally expected to be pipeline output13:10
tristantristanmaat, but the thing is, gnu-toolchain.bst is a symbolic stack element13:11
tristanSo, if you run `bst checkout gnu-toolchain/stage2-binutils.bst`13:12
jonathanmawtristan: yep, doing Scope.BUILD made them come out identical13:12
tristanYou will get exactly that13:12
tristanA checkout of stage2-binutils.bst in the specified directory13:12
tristanBut none of its dependencies13:12
tristanmaatSo if you try gnu-toolchain.bst you get nothing?13:12
tristanYeah, you get the empty artifact13:13
tristanmaatOh. Handy13:13
tristanand return status 0, no error, successful checkout of empty gnu-toolchain.bst13:13
tristanjonathanmaw, anyway I cleaned it up, much less verbose this way :)13:14
tristanSo, I guess it would be logical to stage and integrate into the target directory13:14
tristanAnd have a --scope argument like other bst commands do13:14
tristanBecause... `bst checkout` was originally intended to get at something complete... i.e. a pipeline will have outputs like system images, it works fine for that13:15
tristanBut... since those system images themselves will not have integration commands, or runtime dependencies... I guess it doesnt hurt the original use case, to make `bst checkout` do a stage of dependencies and integration steps13:16
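The improved checkout tristan proposes can be modeled with a toy in-memory sketch (purely illustrative, none of this is BuildStream's real code): stage every runtime dependency's files into the checkout directory, then run each element's integration commands.

```python
# Toy model of the improved `bst checkout` behaviour discussed above:
# instead of extracting only the target's own (possibly empty) artifact,
# stage the whole runtime dependency closure, then integrate.
def checkout(elements, directory, log):
    # elements: runtime dependencies in staging order, each a dict like
    # {"files": {...}, "integration": [...]} -- a stand-in for artifacts.
    for element in elements:
        directory.update(element["files"])   # stage artifact contents
    for element in elements:
        log.extend(element["integration"])   # e.g. run "ldconfig"

deps = [
    {"files": {"/lib/libc.so": b"..."}, "integration": ["ldconfig"]},
    {"files": {"/bin/sh": b"..."}, "integration": []},
]
directory, log = {}, []
checkout(deps, directory, log)
print(sorted(directory), log)  # → ['/bin/sh', '/lib/libc.so'] ['ldconfig']
```

With this shape, checking out a symbolic stack like gnu-toolchain.bst yields the staged contents of its dependencies rather than an empty artifact.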
tristanOk, I've been wanting to clean that up anyway, I'll do it now13:17
tristanmaatI suppose as a workaround for now I can just checkout each individual element?13:18
tristantristanmaat, nah that's not really as good13:18
tristanWell, you *could*13:18
tristankeeping in mind, you only need the elements in stage313:18
tristantristanmaat, actually in your case it's pretty fine13:19
tristantristanmaat, you will want to run ldconfig yourself when chrooting into there, but it's probably not even really needed13:19
tristananyway I'll try to get a better bst checkout done now...13:20
tristanmaatGood, I thought for a second I completely misunderstood how this works...13:20
tristangot something almost working...13:47
tristannice13:53
tristanOk little bug, but it's unrelated and we can fix it later14:03
gitlab-br-botpush on buildstream@master (by Tristan Van Berkom): 5 commits (last: sandbox.py: Dont ensure directories at mark_directory() time) https://gitlab.com/BuildStream/buildstream/commit/9ed44f16885343100407ef267df20b5b6c7aea7d14:08
tristantristanmaat, alright it's done, but there will be some caveats to that long term afaics, maybe we can do tarball checkouts that do it better after fixing #3814:08
tristanI.e. you will still have the same issue in regular flat checkouts, where suid/sgid bits will be stripped for security, and every file will belong to the checking out user14:09
tristanbut for your purpose it will work well14:09
tristanand when we have awesome #38, then we can do similar, use introspection of the artifact cache real data to populate attributes in a tarball14:10
tristanmaatAt least I won't have to check out dependencies manually for the time being14:10
tristanmaat:)14:10
* tristan has an issue when doing bst checkout libsecret.bst from the gnome project, oddly; it has managed to set some xattrs which I am not allowed to copy over to a checkout directory14:11
tristannah, it works quite nicely :)14:11
tristanOKAY !14:12
tristanI think all the current fires are out :)14:12
* ssam2 flicks lighter14:14
tristanalright dont burn the house down ssam2 !14:21
tristanSo I guess, tomorrow I have to try and examine this artifact cache sharing stuff...14:22
tristanAnd soon I have to get back to making GNOME builds work14:23
tristansigh, todo list... getting long... I also want to add a symbol that I can use to check the version of ostree with14:23
*** jude has quit IRC14:32
*** tristan has quit IRC14:33
*** tristan has joined #buildstream15:01
*** tristanmaat is now known as tlater15:03
*** jude has joined #buildstream15:33
*** jonathanmaw has quit IRC15:46
tlaterIs the assemble step in the build process managed entirely through plugins?16:19
ssam2there's some code in buildelement.py16:21
ssam2I think the plugins just provide the commands to run16:21
tlaterWhat commands should I run then? Morph doesn't have that implemented, at least not in the script16:22
tlater-> To build a package16:23
ssam2each element knows what commands need to be run to build that element16:24
ssam2you'll probably need to add a method to the Element or BuildElement class to actually get that info out in a useful form16:24
tristanI changed this a bit last week16:24
tristanfor flexibility, I broke down ->assemble into 3 stages: ->configure_sandbox(), ->stage() and ->assemble()16:25
tristanYou will want to add a new method to Elements which is only optionally implementable, to output the script with formatted variables and everything16:26
tristanI suppose that not implementing it fires ImplError and the engine can just tell you that it cant do anything with elements which fire that16:27
tlaterThat could go into the stage() step, then?16:27
tristanstage is for filling up the build sandbox with data16:27
tristanI dont think that you will want to use any of the existing methods there16:28
tristanFor the creating of source directories, the buildstream core already provides everything you need (because of Source->stage())16:28
tristanFor serializing scripts, some creative design is needed16:29
*** jude has quit IRC16:30
tristanI.e. keep in mind that the fact that build elements run shell scripts, is really just a detail of build elements; other elements can do things without that at all16:30
*** jude has joined #buildstream16:30
tristanAlso, I was hoping to allow script elements at least to use other interpreters than shell16:30
tlaterThat last bit shouldn't be too hard, at least16:31
tristanin any case, you probably only care about BuildElement implementations, and ensuring that stacks do a noop instead of telling the engine ImplError16:31
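The optionally-implementable method tristan describes could look like the following sketch, where the base Element raises ImplError and only kinds that can serialize themselves as a script override it, while stacks return a no-op. Names here are illustrative, not BuildStream's real API.

```python
# Sketch of the optionally-implementable script API discussed above:
# the base class fires ImplError, BuildElement serializes its commands,
# and stacks produce a no-op instead of an error.
class ImplError(Exception):
    pass

class Element:
    def generate_script(self):
        # The engine can report that this element kind cannot be
        # expressed as a script.
        raise ImplError(f"{type(self).__name__} cannot be written as a script")

class BuildElement(Element):
    def __init__(self, commands):
        self.commands = commands

    def generate_script(self):
        # Commands are assumed to already have variables substituted.
        return "\n".join(["set -e"] + self.commands)

class StackElement(Element):
    def generate_script(self):
        return ""  # stacks are a no-op, as suggested above
```

For example, `BuildElement(["./configure", "make"]).generate_script()` yields a three-line shell script beginning with `set -e`.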
tristantlater, true but for the time being I dont think we support ScriptElement derived things for this16:31
tristanScriptElements dont accept any sources, and they let you stage multiple artifacts at different places in the sandbox16:32
tristanSo for instance, you can use host tools A, to deploy sysroot B into an image16:32
tristanit would be nice to have other things than just shell in build elements, but again; it's not really useful, every source module I know of responds to shell commands to build :)16:33
* tristan has never typed: python3 ... ok lets run some python commands inside the interpreter to run 'make' :)16:33
tristanthat said; other-than-shell stuff has happened in the past, RPM spec files support them for postinst scriptlets for instance16:34
tlaterSo "keep support in mind, but don't implement"?16:34
tristanYeah, better to focus on only BuildElement and Stack implementation16:35
tristandont go on a tangent hehe16:35
tristanstarting from here: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/buildelement.py#L17616:36
tristanYou can see how the BuildElement formats the commands, which were substituted in the function below with self.node_subst_list_element()16:37
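The kind of substitution node_subst_list_element() performs can be mimicked in a few lines: commands from the element's YAML carry %{...} markers that get expanded against the element's variables. A minimal sketch, mirroring the idea rather than BuildStream's actual implementation:

```python
# Minimal sketch of %{...} variable substitution as discussed above
# (not BuildStream's real implementation).
import re

def subst(command, variables):
    # Replace each %{name} marker with its value from the variables dict.
    return re.sub(r"%\{(\w[\w-]*)\}", lambda m: variables[m.group(1)], command)

variables = {"prefix": "/usr", "make": "make"}
commands = ["%{make}", "%{make} install DESTDIR=%{prefix}"]
print([subst(c, variables) for c in commands])
# → ['make', 'make install DESTDIR=/usr']
```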
tlaterYup, that makes it a lot easier16:38
tlaterI was digging through Element16:38
tristanInstead, those need to be outputted to some file handle the engine passes to it16:38
tristanSo, Element will certainly need an API for composing a script, which fires ImplError for most everything not concerned with that16:38
tristanbut BuildElement mostly needs to implement that16:39
tristanAlso, you will want the calling code in _pipeline.py, to be allowed to override variables16:39
tristanWhich might be tricky16:39
tristantlater, because you need to override variables during the load process, since they are normally substituted directly at instantiation time16:40
tristanThat is done here: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_pipeline.py#L27916:41
tlaterHow does loading work then? I thought variables could be defined there as well?16:41
tristandirectly during Pipeline.__init__()16:41
tristanbefore the pipeline has any idea of what you want to do with it16:41
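The ordering problem tristan points out can be reduced to a toy illustration: because substitution happens when an element is instantiated, overrides must reach the loader up front and cannot be applied to an already-built pipeline. Purely illustrative, not the real code.

```python
# Toy illustration of why variable overrides must be supplied during
# loading: substitution happens at element instantiation time.
class Element:
    def __init__(self, command_template, variables):
        # Substitution occurs here, once, at instantiation.
        self.command = command_template % variables

def load_pipeline(overrides=None):
    variables = {"prefix": "/usr"}
    variables.update(overrides or {})  # must happen *before* instantiation
    return Element("make install DESTDIR=%(prefix)s", variables)

print(load_pipeline({"prefix": "/opt"}).command)  # → make install DESTDIR=/opt
```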
tristanSo, I look forward to seeing your creative solution to re-arrange that code so that it can be done, and doesnt look too spaghetti :D16:42
tlaterWish me luck... x)16:43
tristanheh, it's almost 2am, so I wont get too deep into figuring solutions for that right now :)16:43
tlaterThat's a fair point, I need to go anyway16:44
*** jude has quit IRC16:49
*** jude has joined #buildstream16:49
*** tlater has quit IRC16:56
*** jude has quit IRC17:03
*** ssam2 has quit IRC18:25
*** tristan has quit IRC20:09
gitlab-br-botbuildstream: merge request (sam/traceback-fixes->master: Replace a few tracebacks with actual error messages) #24 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/2422:28
gitlab-br-botbuildstream: merge request (sam/traceback-fixes->master: Replace a few tracebacks with actual error messages) #24 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/2422:32

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!