*** juergbi has quit IRC | 02:11 | |
*** juergbi has joined #buildstream | 02:11 | |
*** tristan has joined #buildstream | 05:38 | |
*** ChanServ sets mode: +o tristan | 06:51 | |
*** jude has joined #buildstream | 07:03 | |
*** tristanmaat has joined #buildstream | 08:22 | |
*** tristanmaat has quit IRC | 08:24 | |
*** tristanmaat has joined #buildstream | 08:25 | |
*** jonathanmaw has joined #buildstream | 08:25 | |
*** tristanmaat has quit IRC | 08:26 | |
*** tristanmaat has joined #buildstream | 08:26 | |
*** tiagogomes has quit IRC | 08:46 | |
*** tiagogomes has joined #buildstream | 08:46 | |
tristan | jonathanmaw, you have been building build-gnome branch right ? | 09:01 |
tristan | jonathanmaw, with your MR applied which I forgot to apply until now ? | 09:01 |
jonathanmaw | tristan: yep | 09:02 |
*** ssam2 has joined #buildstream | 09:02 | |
tristan | jonathanmaw, I'm having trouble with stage2-make | 09:02 |
tristan | it's telling me it wants aclocal-1.14 and it doesn't exist | 09:02
tristan | while clearly, aclocal-1.15 is *right there* | 09:02 |
tristan | maybe it's my changes... | 09:03 |
ssam2 | this might not be relevant, but stage2-make is probably building a tarball committed to git | 09:04 |
juergbi | tristan: did you manage to get artifact sharing working with my instructions? any feedback from your side? | 09:04 |
ssam2 | actually might be a timestamps issue | 09:04 |
tristan | juergbi, unfortunately... it's been a bit of a messy week and I didn't get there :'( | 09:04
tristan | ssam2, yeah was thinking that, I have a thing which sets the mtime across the board already but maybe I fudged it ? | 09:04 |
tristan | anyway lemme investigate | 09:05 |
juergbi | ok, no problem | 09:05 |
tristan | juergbi, last week was a mess of ostree investigations | 09:06 |
tristan | juergbi, which concluded in my implementation of a 'copy-on-write' hardlink fuse layer, and a stronger game plan for fakeroot-over-fuse: https://gitlab.com/BuildStream/buildstream/issues/38 | 09:07 |
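The copy-on-write hardlink layer tristan mentions is implemented as a FUSE filesystem; as a rough user-space illustration of the same idea (not BuildStream's actual code), a write path can "copy up" any file whose inode is still shared with the artifact cache before mutating it:

```python
import os
import shutil
import stat

def copy_up_if_hardlinked(path):
    """Break a hardlink before a write so the shared artifact-cache
    copy is never mutated -- a user-space approximation of what a
    copy-on-write FUSE layer would do on open-for-write."""
    st = os.lstat(path)
    if not stat.S_ISREG(st.st_mode) or st.st_nlink < 2:
        return  # nothing shared, safe to write in place
    tmp = path + ".cow-tmp"
    shutil.copy2(path, tmp)   # private copy, preserving mode and times
    os.replace(tmp, path)     # atomically swap the copy in
```

After the swap, writes go to a private inode and the original cache object keeps its old contents.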
tristan | the last piece of the puzzle (ostree-wise) is https://github.com/ostreedev/ostree/issues/925 | 09:08 |
juergbi | ok, taking a look | 09:08 |
tristan | Which colin inappropriately renamed, this is not anything to do with rofiles-fuse | 09:08 |
ssam2 | I guess Colin is still thinking that everyone should switch to read-only /usr | 09:10 |
tristan | ssam2, I don't think so... I think he just missed the point | 09:12
tristan | in fifth comment (since linking to github issue comments is not as easy as bugzilla somehow), he notes: | 09:13 |
tristan | "That type of thing is definitely part of the original idea; it's really the reason why OstreeRepo and OstreeSysroot are distinct layers." | 09:13 |
tristan | So I think this falls into the mission of ostree | 09:13 |
tristan | well enough | 09:13 |
tristan | Ok verified I screwed it up, if I back up in time to pre-copy-on-write hardlinks, stage2-make builds | 09:14 |
tristan | So I have to think about how I fudged it up | 09:14 |
tristan | Ah, that was fast :) | 09:18 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 1 commit (last: element.py: Fix regression with setting deterministic mtime on source stages) https://gitlab.com/BuildStream/buildstream/commit/82523346be14496b1aa10f9d45ddced34b53233c | 09:21 |
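The fix referenced in that commit is about forcing deterministic mtimes on staged sources, so autotools does not think configure inputs are newer than their outputs (the aclocal-1.14 symptom above). A minimal sketch of that kind of helper; the timestamp constant here is an arbitrary illustrative value, not necessarily what BuildStream uses:

```python
import os

MAGIC_TIMESTAMP = 1321009871  # arbitrary fixed epoch, for illustration only

def set_deterministic_mtime(directory, mtime=MAGIC_TIMESTAMP):
    """Recursively force every file and directory under `directory`
    to one fixed modification time so staged sources are reproducible
    and autotools never tries to regenerate committed files."""
    for root, dirs, files in os.walk(directory, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            # follow_symlinks=False avoids touching symlink targets
            os.utime(path, (mtime, mtime), follow_symlinks=False)
        os.utime(root, (mtime, mtime))
```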
tristan | juergbi, if you're interested, there was also a thread here https://mail.gnome.org/archives/ostree-list/2017-June/msg00011.html, but I think it's largely beside the point now that I've figured things out a bit more | 09:44 |
tristan | juergbi, so we went through a few stages with ostree... the first was assigning all files uid/gid 0 and user mode checkouts | 09:45 |
tristan | juergbi, *but* there were a couple of bugs which have since been fixed in ostree; one of them meant executable bits were completely lost, *unless* the special uncompressed-object-cache was explicitly in use at checkout time (which wasn't obvious) | 09:45
tristan | that was a regression we didn't notice, but it caused the gitlab runners to malfunction (and anyone with a bleeding-edge ostree) | 09:46
tristan | juergbi, then... I needed to hack around the hardlinks getting corrupted after doing the GNOME auto-conversions, because we need to run a mutation script which does `dpkg --configure -a` on an imported debian sysroot | 09:47 |
tristan | So I hacked around that and made the artifact cache use archive-z2, which temporarily swept that under the rug | 09:47 |
tristan | In the meantime, there was another ostree bug, which was (since forever) that user mode checkouts were always "(mode | 0755) &= ~(suid|sgid)" | 09:48
tristan | So now, after talking about this on the above thread, we agreed that if we cannot have arbitrary uid/gid, then it's pointless to argue for an insecure case of allowing the build user to own an suid/sgid file, but that bug has been fixed so that user mode checkouts are always "mode &= ~(suid|sgid)" | 09:50 |
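The permission policy agreed on in that thread fits in a few lines; `old_buggy_checkout_mode` is a made-up name here, just to show the contrast with the pre-fix behaviour:

```python
import stat

def checkout_mode(mode):
    """Fixed user-mode checkout policy: keep the original
    permissions but always strip the setuid/setgid bits."""
    return mode & ~(stat.S_ISUID | stat.S_ISGID)

def old_buggy_checkout_mode(mode):
    """The earlier behaviour described above: permissions were also
    forced up to at least 0755, i.e. "(mode | 0755) &= ~(suid|sgid)"."""
    return (mode | 0o755) & ~(stat.S_ISUID | stat.S_ISGID)
```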
tristan | After this, I did the copy-on-write fuse layer, and reverted the artifact cache back to its initial form | 09:50
tristan | So | 09:50 |
tristan | o Files always committed uid/gid 0 | 09:50 |
tristan | o Files always checked out with original permissions, builder user ownership, and no suid/sgid | 09:51 |
tristan | o Files written to in the staging area become copies automatically, no artifact cache corruption | 09:51 |
tristan | juergbi, that is present state of artifact cache | 09:51 |
juergbi | ok, thanks for the summary | 09:52 |
tristan | Addressing issue #38 (which I filed just yesterday) will give us fakeroot-over-fuse cooperating with the artifact cache, and allow everything you ever wanted (xattrs, arbitrary UID/GIDs, SUID) in the build sandbox and build outputs | 09:52
tristan | Also it can guarantee that the user never actually owns an suid 'sudo' anywhere | 09:53 |
tristan | but for now, without that we can still produce relatively sane system images | 09:53 |
tristan | they just don't have arbitrary uid/gid ownerships, and lack suid things | 09:54
tristan | (or xattrs, but those are fairly rare too) | 09:54 |
tristan | Anyway, as you can see, it was a wild week :) | 09:54 |
juergbi | :) | 09:54 |
tristan | juergbi, regarding the artifact cache sharing, which I had not had time to look at unfortunately... I have some things I'm curious about :) | 09:55 |
tristan | A.) When a build can be downloaded, are we able to eliminate its build dependencies from the pipeline ? | 09:56
tristan | B.) Even more fun... can we do it in `bst build --track` mode ? | 09:56 |
tristan | hehehe | 09:56 |
juergbi | A) not right now, might be possible once/if we use summary file where we could check availability early on (although this could be problematic in case connection to artifact server fails in the middle of bst run) | 10:00 |
tristan | This is very interesting actually. In --track mode it means: I have not seen this commit sha of systemd before, but someone else may have already built it... after getting the latest sha, the element moves on to the artifact pull queue... but once it gets there, can we say we already have a systemd artifact available, so there is no need to process, at all, any elements which are build-only dependencies for it (as long as they're not referenced by anything else we do need to build) | 10:01
tristan | juergbi, also, I have a feeling that we can query the ostree for what refs it has without using a summary file... but I might be wrong | 10:02 |
tristan | I think a very acceptable approach though, is to do that query once only at the beginning of the build | 10:02 |
tristan | (like, lets not care that someone built it before us, in the middle of a build, or at least lets not try to recalculate build plans in that case) | 10:03 |
juergbi | via sshfs we could query using normal filesystem operations without summary file, however, that might be one (remote) check for each artifact (given they are all in different directories, unless SFTP supports something like recursive listings) | 10:04 |
juergbi | for http-only servers, one HEAD request per artifact could work | 10:06 |
juergbi | pipelining should theoretically be possible in the HTTP case | 10:06 |
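A sketch of the per-artifact HEAD probe juergbi suggests for http-only servers. Both the `artifact_ref` layout and the `refs/heads/` URL prefix are illustrative assumptions, not a real BuildStream convention:

```python
import urllib.error
import urllib.request

def artifact_ref(project, element_name, cache_key):
    """Map an element to an artifact ref. The project/element/key
    layout here is hypothetical, purely for illustration."""
    return "{}/{}/{}".format(project, element_name.replace(".bst", ""), cache_key)

def artifact_available(base_url, ref, timeout=10):
    """One HEAD request per artifact against a plain HTTP server;
    returns True on any 2xx response, False on errors or 404."""
    url = "{}/refs/heads/{}".format(base_url.rstrip("/"), ref)
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except urllib.error.URLError:
        return False
```

Since each probe is independent, the checks could in principle be pipelined or issued concurrently, as juergbi notes.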
tristan | juergbi, hmmm one sec... | 10:14 |
tristan | juergbi, :) | 10:17 |
tristan | This command over http gave me a list: | 10:17 |
tristan | ostree remote refs --repo ~/.cache/buildstream/sources/ostree/https___sdk_gnome_org_repo_/ origin | 10:17 |
tristan | juergbi, so assuming that the artifact cache is set up to have a remote configured for the artifact share, some ostree API can give us that, and I don't think it requires a summary file to be present in the remote | 10:18
juergbi | tristan: have you confirmed that this list actually includes all remote refs and not just the ones that have been pulled? | 10:21 |
juergbi | it looks like a local command to me | 10:21 |
ssam2 | tristanmaat, there are a couple of ways of making the size of the bootstrap tarball more manageable | 10:22
ssam2 | tristanmaat, one is, we don't actually need all the components that are used in a normal build. The Baserock definitions have a separate stratum named 'cross-bootstrap' (maybe not the best name) which contained just enough to run Morph: http://git.baserock.org/cgit/baserock/baserock/definitions.git/tree/strata/cross-bootstrap.morph | 10:23 |
ssam2 | tristanmaat, I think we'll need something similar here. But I think you should focus on getting the command working first | 10:24 |
ssam2 | tristanmaat, so I suggest writing a testcase that just tries to produce a source bundle for a couple of small components (let's say fhs-dirs and busybox perhaps ...) | 10:24 |
ssam2 | tristanmaat, make the `source-bundle` command produce a bundle that you can build on your laptop | 10:25 |
tristanmaat | Ok, yeah, that sounds reasonable. | 10:25 |
juergbi | tristan: it lists all refs that are locally stored in refs/remotes. however, have to check how that directory is updated | 10:25 |
juergbi | i suspect it's only for actually pulled refs | 10:25 |
tristanmaat | Should I merge changes from the main repo into it, or continue on your private copy? Or just restart on the main repo? | 10:25
tristan | juergbi, oh... sure about that ? strange | 10:26 |
ssam2 | tristanmaat, consider it a github style workflow. So make your own personal fork to work in | 10:26 |
ssam2 | tristanmaat, when it's done you can send a merge request to the main repo | 10:26 |
juergbi | tristan: i'll have to do some more digging | 10:27 |
ssam2 | tristanmaat, and start from my changes if you like, but you may as well pull them into your fork. Makes more sense than me giving you access to my personal fork. | 10:27 |
tristan | Ok so ssam2, tristanmaat... what's going on here I'm a bit lost; are we going to put the parse validation stuff aside for now ? | 10:30 |
tristan | That is fine with me fwiw | 10:31 |
ssam2 | i don't mind either way. Probably better to move onto the source-bundle work if tristanmaat feels ready though | 10:32 |
tristanmaat | Yeah, we are. I'm starting to work on the tar bundle command, until either I finish or this project is more urgent. | 10:32 |
tristanmaat | Laurence told me to, so I'll just do that. | 10:32 |
tristan | tristanmaat, yeah ok that is fine | 10:34 |
tristan | But I want to know what's going on before something random comes in the form of an MR, for this source bundle thing | 10:34 |
ssam2 | in how much detail ? | 10:35 |
tristan | ssam2, tristanmaat so what is the output of this, first of all ? Is it a directory with staged source code in many element named subdirectories, plus a main script and many element specific scripts ? | 10:36 |
ssam2 | my thought is that yeah, it should generate exactly that | 10:36 |
tristan | ssam2, in enough detail that I have an idea what's going on, because I need to know it makes sense before I start making demands for expensive changes on your team's behalf | 10:36
ssam2 | sure... my issue here is that I find prototyping to be the only way of working out how things should work | 10:37 |
tristan | So for what is the Sources, this can be done with current APIs | 10:37 |
tristan | ssam2, except you have me available and I have a really good idea of what can go wrong :) | 10:37 |
tristan | so I will save you some time | 10:37 |
* tristan is building an image so computer is intermittently locking up... | 10:38 | |
tristanmaat | tristan, I'll be working on this particular command mostly | 10:39 |
ssam2 | i did an initial hack here: https://gitlab.com/samthursfield/buildstream/tree/sam/source-bundle | 10:39
ssam2 | which indeed, is able to stage the sources without much hassle | 10:39 |
tristan | Ok so... the command will have to run a pipeline in the case that the sources need to be tracked or fetched | 10:40 |
tristan | That can be determined before trying to stage the sources | 10:40 |
tristan | so only optionally | 10:40 |
tristan | But the tricky part is going to be serializing scripts | 10:40 |
ssam2 | right. is track useful here? I was wondering if we could ignore it, since this is just for bootstrapping | 10:40 |
tristan | ssam2, if we have it for build, we should have it there for consistency, however it should not take more than an hour really, assuming everything else is done | 10:41 |
ssam2 | ok | 10:41 |
tristan | I.e. we already have it for build, the code is there, it's just a matter of constructing the scheduler queues with Track in advance of Fetch | 10:41 |
ssam2 | and doing twice as much testing | 10:42 |
tristan | Also, note that this is certainly *not* just for bootstrapping | 10:42 |
ssam2 | ah, ok | 10:42 |
ssam2 | what are the other use cases? | 10:42 |
tristan | In fact, this is only for *after* bootstrapping, and before the target host is capable of running buildstream | 10:42 |
tristan | this is the low level middleware | 10:42 |
tristan | Right ? | 10:42 |
tristan | It *can* contain a bootstrap | 10:42 |
ssam2 | hoping to avoid another debate about the meaning of "bootstrap", so OK | 10:43 |
tristan | but the point is to be able to create a script bundle which can build all of the buildstream dependencies | 10:43 |
ssam2 | it's for producing a system that can run BuildStream in the absence of existing native host tools | 10:43
ssam2 | and a cross sandbox | 10:43 |
tristan | Correct, ok so there is one "gotcha" I can see | 10:43 |
tristan | The target system which will run this, will of course need to have some base tools and need to have chroot I guess | 10:44 |
ssam2 | I think busybox can provide that if needed | 10:44 |
tristan | So I *guess* that every build on that target happens basically like in BuildStream but as root in a chroot ? | 10:44 |
tristan | Or, do you install everything to /usr ? | 10:44 |
tristan | Yeah that's not the gotcha | 10:44 |
tristan | The main gotcha: Not every buildstream element can support this | 10:45 |
ssam2 | in Morph, seems what happens is that it runs the normal install-commands | 10:45 |
tristan | ssam2, So probably at the beginning of running this `bst source-bundle` command, we need to check the elements in the pipeline for support | 10:45 |
ssam2 | right | 10:45 |
tristan | ssam2, in BuildStream, not every element is a BuildElement | 10:45 |
ssam2 | ImportElement in particular is not wanted, and should actually cause an error since it'd be trying to import host tools that don't exist | 10:46 |
tristan | ssam2, which also means you need some sort of special semantic for eliminating parts of the pipeline you load | 10:46 |
tristan | Maybe something like --start-from | 10:46 |
tristan | Anything below --start-from gets cut off of the pipeline ? | 10:47 |
ssam2 | hmm, that could work | 10:47 |
ssam2 | I was thinking we'd need a separate stack just for cross-bootstrap. Since we also need to avoid having 8 copies of gcc.git if possible | 10:47 |
ssam2 | but, --start-from would allow that | 10:47 |
ssam2 | and it'd be very nice to avoid having a duplicate stack | 10:47 |
ssam2 | so yeah I like that | 10:47 |
tristan | Not sure it does for every pipeline, we could have multiple imports for various build-only dependencies, so --start-from might have to be something like --sever | 10:47 |
tristan | And it can be repeated... bst source-bundle --sever elementa.bst --sever elementb.bst | 10:48
ssam2 | ah right, because for example an element for Rust might try to pull in a binary blob of a precompiled Rust compiler | 10:48 |
tristan | Something like that could happen | 10:49 |
ssam2 | OK | 10:49 |
tristan | It's only a matter of fitting it into how buildstream works | 10:49 |
tristan | since it can happen, better to allow severing of multiple dependency chains | 10:49 |
tristan | So probably, this is only supported by BuildElement and Stack | 10:50 |
tristan | And the base generated script has a way of running these | 10:50 |
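The repeatable `--sever` behaviour discussed above can be modelled as a graph walk that stops descending at severed elements; this is a sketch of the semantics, not BuildStream code:

```python
def prune_severed(deps, targets, severed):
    """Compute which elements remain in the pipeline when the
    dependency chains below each element in `severed` are cut off.
    `deps` maps element name -> list of dependency names. A severed
    element itself stays (its output is assumed to pre-exist), but
    nothing below it is pulled in *through* it; an element under a
    severed one survives only if some other path still needs it."""
    keep = set()

    def visit(element):
        if element in keep:
            return
        keep.add(element)
        if element in severed:
            return  # cut here: do not descend into its dependencies
        for dep in deps.get(element, []):
            visit(dep)

    for target in targets:
        visit(target)
    return keep
```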
tristan | ssam2, anyway, I think that's enough... just wanted to stick my nose in and have an idea of what's going on | 10:50 |
tristan | I'm satisfied :) | 10:50 |
ssam2 | its been helpful :-) | 10:51 |
tristan | ssam2, not sure how morph did it, but probably by substituting the install-root with '/', you could have a serialized build model which installs everything directly into the same chroot dir | 10:54
ssam2 | seems what Morph did was install to /%{name}.inst | 10:54 |
tristan | then maybe you could move a symlink from /buildstream/build -> real build in between builds or such | 10:54 |
ssam2 | then run `(cd /%{name}.inst; find . | cpio -umdp /)` | 10:54 |
ssam2 | ugly, but I guess it worked | 10:55 |
tristan | myeah | 10:55 |
ssam2 | it didn't use a chroot at all | 10:55 |
tristan | So it builds directly onto the slash | 10:55 |
ssam2 | yeah | 10:55 |
tristan | yeah that can also be fine for this purpose I guess | 10:55 |
tristan | still I would recommend just setting install-root to / when doing substitutions on build elements | 10:56 |
ssam2 | yeah, same effect but simpler | 10:56 |
tristan | the only problem I could see is badly written buildscripts choking on DESTDIR=/ | 10:57 |
tristan | but I doubt it | 10:57 |
tristan | ssam2, regarding the chroot... I *think* that actually, if someone wanted to try it... they could always manually put build-essential into a directory, chroot into that, and run the source-bundle output directly in there | 10:58 |
tristan | so I guess installing directly to / is interesting in that light also | 10:58 |
ssam2 | for testing, you mean ? | 10:59 |
tristan | Yeah | 10:59 |
tristan | Also, I *think* you want to sever build-essential (or 'gnu-toolchain') from the source-bundle anyway | 10:59 |
ssam2 | very much so | 11:00 |
tristan | I.e. you will always need a kernel and a cross-built runtime placed onto the host anyway | 11:00 |
tristan | I guess you are working on that part while tristanmaat looks at the other | 11:00 |
ssam2 | yeah, I'm looking at cross-building a suitable sysroot | 11:00 |
ssam2 | which i'll want to discuss... but probably tomorrow | 11:01 |
tristan | ssam2, so for testing (before cross bootstrap)... tristanmaat can easily test with `bst checkout gnu-toolchain.bst` | 11:01 |
tristan | and chroot into the checkout | 11:01 |
tristanmaat | And in there run the tarball | 11:01 |
tristanmaat | Cool | 11:01 |
tristan | right | 11:01 |
tristan | the magic self extracting executable tarball :) | 11:02 |
tristan | (joking) | 11:02 |
tristan | (but doable) | 11:02 |
ssam2 | i'm struggling to reconcile architecture conditionals with cross compile support | 11:31 |
ssam2 | there are two places in the converted Baserock definitions that use architecture conditionals | 11:32 |
ssam2 | in the project.conf it sets the compiler target for stage1 and host architecture for stage2, so the 'arches' conditional should follow the cross architecture (target) if set | 11:32 |
ssam2 | but in the base-sdk and base-platform elements, it's used to choose which ref (and which architecture) we pull, and that should always follow the native architecture even when cross compiling | 11:33 |
ssam2 | we could introduce a new 'cross-arches' or 'native-arches' conditional, but seems confusing and a bit ugly | 11:34 |
ssam2 | or we could decide how the 'arches' conditional works based on what type of element we're dealing with, since ImportElement is never going to really have a context of 'cross compiling'. But it seems that _loader.resolve_arch() is called before we know what type of element we're dealing with | 11:34 |
ssam2 | I guess we do know what kind of element we're dealing with by looking at the 'kind' field | 11:39 |
tristan | ssam2, I was having that conundrum when we spoke earlier | 11:40 |
tristan | ssam2, my favorite approach I think, is that there is --host-arch and there is --build-arch, and they both default to --arch | 11:40 |
tristan | ssam2, however it leaves the question, what is --arch in the case that host and target differ | 11:41 |
tristan | sorry, s/--build-arch/--target-arch/ | 11:41 |
ssam2 | sure, if 'arches' follows '--arch' there seems to be no solution that would work | 11:41 |
ssam2 | unless we decide based on element type | 11:41 |
tristan | no ? | 11:41 |
ssam2 | like I said above, the base-sdk and base-platform need --arch to be --host-arch in all cases | 11:42 |
tristan | Well | 11:42 |
ssam2 | but the stage1 and stage2 need --arch to be --target-arch when cross building, and --host-arch when native building | 11:42 |
tristan | ssam2, I think this may just depend on how you setup your pipeline | 11:42 |
tristan | I.e. we depend on base-sdk and base-platform to build natively, I think it makes sense that those are the target | 11:43 |
ssam2 | I don't understand what you mean... | 11:43 |
tristan | ssam2, i.e. in the case that we wanted to virtual cross build, the sandbox would have to depend on a host emulator for that target arch | 11:43 |
ssam2 | right, but that's a whole extra codepath that's just not present yet, right? | 11:44 |
tristan | ssam2, Ok so, you are talking about base-sdk and base-platform like those elements are set in stone, but we depend on them for something specific | 11:44 |
ssam2 | yes | 11:44 |
tristan | ssam2, yes, but I want to keep that codepath in mind all the way through | 11:44 |
tristan | otherwise we won't have something coherent for it | 11:44
ssam2 | ok, so you think the host tools should be treated specially somehow? | 11:44 |
ssam2 | that could make sense | 11:44 |
tristan | I mean that when building things, everything ("arches") is --target | 11:45 |
ssam2 | that's fine apart from ImportElement | 11:45 |
tristan | So in a shiney future, we will need to depend on the same base runtimes to be staged in a qemu sandbox for cross target building | 11:45 |
tristan | However I think you are confusing ImportElement and what base-sdk/base-platform *is*, it's just an element we depend on for that project | 11:46 |
tristan | It does not mean that if you want to cross build, you have to depend on *that* import element | 11:46 |
ssam2 | well, ok | 11:46 |
ssam2 | is there a way that an element can select its dependencies based on whether or not we're cross building ? | 11:47 |
tristan | If we add "host-arches" and the command line/context options we spoke of, then one could distinguish | 11:47 |
tristan | I think we already discussed that we *need* to have a different arches conditional in order to do this at all | 11:48 |
ssam2 | ah, I was missing that point | 11:48 |
ssam2 | if we can have two types of 'arches' conditional then there's no issue | 11:48 |
tristan | ssam2, So, that would mean in a project with a cross-capable bootstrap, we would have that base import depend on host-arches | 11:48 |
tristan | Exactly, thats certainly necessary | 11:48 |
ssam2 | so 'host-arches' always follows '--host-arch', and 'arches' follows '--target-arch' (which defaults to '--host-arch') | 11:49 |
tristan | ssam2, so I think; we have the desired thing... we have --host-arch and we have --target-arch command line options, and they both default to --arch, which in turn defaults to `uname` | 11:49 |
ssam2 | ok | 11:50 |
tristan | then exactly as you said, we have host-arches conditionals (new) and the existing arches conditionals (existing = target) | 11:50 |
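The option-defaulting scheme just agreed (`--host-arch` and `--target-arch` both default to `--arch`, which in turn defaults to `uname`) is simple to state in code; a sketch:

```python
import platform

def resolve_arches(arch=None, host_arch=None, target_arch=None):
    """Resolve the effective (host, target) architecture pair:
    --host-arch and --target-arch each fall back to --arch, and
    --arch falls back to the machine reported by uname."""
    arch = arch or platform.uname().machine
    host = host_arch or arch
    target = target_arch or arch
    return host, target
```

Specifying all three at once (the mips/x86_64/arm case discussed below) leaves `--arch` with nothing to do, which is why tristan suggests warning in that situation.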
ssam2 | as a separate thing, when would '--host-arch' differ from '--arch'? | 11:51 |
tristan | ssam2, I am struggling as to whether we should bite the bullet and deprecate `arches` for a more consistently named `target-arches` | 11:51 |
ssam2 | i'm not sure on that point either | 11:51 |
tristan | ssam2, when you are on a mips system, and you only have an x86 runtime to import, in order to produce an arm system. | 11:51 |
tristan | :) | 11:52 |
ssam2 | OK. But in that case, what purpose does --arch serve? | 11:52 |
tristan | In that case your sandbox will require emulation for both host and target architectures | 11:52 |
ssam2 | you'd have --arch=mips, --host=arch-x86_64, --target-arch=arm | 11:52 |
tristan | ssam2, that should produce an error I think, or a warning | 11:52 |
ssam2 | but specifying --arch=mips is pointless since it represents something that can't be changed (the physical machine architecture) | 11:52 |
tristan | "WARNING: Meaningless option --arch specified" | 11:53 |
tristan | <tristan> ssam2, so I think; we have the desired thing... we have --host-arch and we have --target-arch command line options, and they both default to --arch, which in turn defaults to `uname` | 11:53 |
ssam2 | right, I understand that part | 11:53 |
tristan | ssam2, as mentioned above, its only convenience | 11:53 |
ssam2 | my question is why bother with --arch if it can only be used to produce error messages | 11:53 |
tristan | its not | 11:53 |
tristan | ssam2, the usual case is that you don't need to specify it... the less usual case is when you have to specify it, because arch names are just not consistent across the board (just because uname happens to report x86_64 on your 64-bit Intel machine does not mean it will report x86_32 on your actual i686 machine) | 11:54
tristan | ssam2, So at some point, what the project calls a given "arch" is not the same as what uname produces, so it makes sense to let the user specify one that their project understands | 11:55 |
ssam2 | wouldn't you use use `--host-arch=x86_32` if you wanted to build x86_32 on an x86_64 machine though ? | 11:56 |
tristan | No | 11:56 |
ssam2 | i'm curious how this ties in with the commands other than `bst build`, as well | 11:56 |
tristan | I'm curious about that too, but I believe the only thing it can affect is cache keys | 11:56
ssam2 | i mean, it has use cases. `bst show --arch=armv7l` can be used to show if I have built an armv7l image already | 11:57 |
ssam2 | but that could equally be `bst show --host-arch=armv7l` | 11:57 |
tristan | ssam2, Ok so, in a world where we do have sandboxes that can do emulation... *both* the host and target arches are not necessarily what your actual computer is running | 11:58 |
ssam2 | yes, I understand that | 11:58 |
tristan | If you wanted to build x86_32 on an x86_64 machine, you would specify --target-arch=x86_32 | 11:58 |
tristan | But | 11:59 |
tristan | In the case that uname does not report x86_64 on a 64bit intel, then you would have to specify both | 11:59 |
ssam2 | so, it'd build stage1 where --host-arch=x86_64 and --target-arch=x86_32 | 12:02 |
ssam2 | then build a stage2 where --host-arch=x86_32 and --target-arch=x86_32 | 12:03 |
ssam2 | but if that's how it works, how does it know to flip --host-arch over to x86_32 for stage2 onwards? | 12:03 |
tristan | Sigh.... | 12:06 |
tristan | ssam2, for the whole duration of gnu-toolchain, --host-arch is x86_64 and --target-arch is x86_32 (in the said example) | 12:07 |
ssam2 | ok | 12:07 |
tristan | this has nothing to do with stage1 vs stage2 | 12:07
ssam2 | is the --host-arch the same throughout the build? | 12:07 |
tristan | gnu-toolchain has to guarantee that everything it built for --target, was never run. | 12:08 |
tristan | ssam2, yes, throughout the whole thing, it's only for gnu-toolchain, and bsp | 12:08 |
tristan | only things which can cross build | 12:08 |
ssam2 | ok | 12:08 |
ssam2 | the thing that has kept confusing me is the kernel, in fact | 12:09 |
tristan | Once that output is produced, it was built with the requirement of a host-arch sandbox | 12:09 |
ssam2 | in the cross sandbox world, we need to cross build a kernel, so let's say we build linux.bst with --host-arch=x86_64 and --target-arch=x86_32 | 12:09 |
tristan | and it produced something that can run in a target-arch sandbox | 12:09 |
tristan | ssam2, So... for now... we dont have to worry too much, but one thing this conversation revealed is that elements or projects which require host or target sandboxes need to be declarative in some way | 12:10 |
tristan | for the virtualization thing to work | 12:11 |
tristan | but we need not address that right now | 12:11 |
tristan | ssam2, it seems to me that projects might be a good place to make this distinction; once we get recursive pipelines we could group things in such a way that a given project requires the host arch, while a depending project requires target arch for staging it's dependencies | 12:12 |
ssam2 | so the bootstrap would be a project, and the rest would be another project? | 12:12 |
tristan | but anyway, it need not be decided right now, until we actually do virtualized sandboxes | 12:12 |
ssam2 | it's useful to consider though | 12:13 |
ssam2 | what keeps setting me backwards is thinking about a theoretical 'linux.bst' | 12:13 |
tristan | ssam2, that has been my aim since day one yes; actually I would prefer that a project like Baserock be something like 6 or 7 projects | 12:13 |
ssam2 | let's say linux.bst can be cross-built in the host sandbox in order to bring up a target sandbox, but then can also be native built in the target sandbox to produce a kernel for the final image | 12:14 |
ssam2 | but if --host-arch and --target-arch differ, it'll always try to cross-compile | 12:14 |
tristan | I hope that we dont need a kernel for the virtualization and we can get away with better performance in user mode | 12:14 |
tristan | But yeah, if we need real kernel full VMs, then it needs to stage a runtime and a kernel | 12:15 |
tristan | ssam2, so what you are saying, is not true if we have a way to declaratively say that "this project needs host-arch sandboxes" | 12:16 |
tristan | Or "this element" | 12:16 |
ssam2 | that seems the wrong way to look at it | 12:16 |
ssam2 | from the POV of a definitions author, you want a single kernel definition | 12:16 |
tristan | Rrrrright | 12:17 |
tristan | And ? | 12:17 |
ssam2 | so it needs a host-sandbox or a target sandbox depending on whether its being native built as part of an image build, or cross built as part of bringing up a target sandbox | 12:17 |
tristan | ssam2, So if you are building natively and not cross building, then host-arch == target-arch, but the kernel is a cross-capable build and thus *requires* host-arch sandbox | 12:18 |
ssam2 | but there may be two kernels in a single build | 12:18 |
ssam2 | let's say I'm building for an ARM box in an X86 sandbox on MIPS, again | 12:19 |
ssam2 | I need an x86_64 kernel to bring up the x86_64 sandbox | 12:19 |
tristan | Right | 12:19 |
ssam2 | and then a native-built ARM kernel | 12:19 |
tristan | Okay, that's a good point | 12:19 |
tristan | Let's talk about it in 6 months :) | 12:19 |
tristan | ssam2, or do you think this model will pose insurmountable problems at that stage ? | 12:19 |
ssam2 | not insurmountable problems | 12:19 |
tristan | I think maybe you have pointed out a need for the same element to arise twice in a pipeline, but formatted differently | 12:20 |
tristan | What's important is that we have a model that makes sense in the first place | 12:20 |
ssam2 | the simple solution would be to have a 'host-linux' element as part of the bootstrap and a 'linux' element later on | 12:20 |
ssam2 | I guess the host-linux kernel would be special anyway, as the target sandbox needs some way of knowing that "this is my kernel" | 12:21 |
tristan | That would be the hack yeah | 12:21 |
tristan | Another way to look at it is, one project can depend on an arch of another project | 12:21 |
tristan | And a project can depend on itself | 12:21 |
tristan | That way we invoke the loader twice | 12:21 |
ssam2 | which would effectively provide a way for host-arch to change during a pipeline ? | 12:22 |
* ssam2 head spinning again | 12:22 | |
ssam2 | as long as you're aware of the kernel issues then I feel I can get on with the initial work, anyway | 12:23 |
tristan | jonathanmaw, is there a reason you had to do this nested loop: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/scriptelement.py#L204 ? | 12:23 |
ssam2 | one final question: should every command grow `--target-arch` and `--host-arch` ? | 12:23 |
tristan | jonathanmaw, instead of simply self.search(Scope.BUILD, ...) ? | 12:24 |
tristan | ssam2, yeah I think so... I have been contemplating moving that to the main group (e.g. bst --arch build ...) | 12:25 |
tristan | but I think there are some things which dont need it ? | 12:25 |
jonathanmaw | tristan: I think the problem I had was that there was no way to make search not search recursively. | 12:25 |
ssam2 | tristan: given that arch conditionals exist, I think everything does need it | 12:25 |
jonathanmaw | hrm, maybe not | 12:26 |
tristan | jonathanmaw, in what case do you not want search to search recursively though ? | 12:27 |
tristan | that's sort of the point | 12:27 |
jonathanmaw | ah, I see. It was because it was only meant to find a Scope.RUN dependency in one of the element's Scope.BUILD dependencies. Does an element's Scope.RUN dependencies automatically include the Scope.RUN dependencies of all its Scope.BUILD dependencies? | 12:28 |
tristanmaat | Is there a working buildstream project for gnu-toolchain somewhere? The gnu-toolchain branch in buildstream-tests doesn't seem to work :/ | 12:32 |
ssam2 | tristanmaat, try branch sam/buildstream of https://gitlab.com/baserock/definitions | 12:32 |
ssam2 | it got discussed a bit on the baserock-dev mailing list: https://listmaster.pepperfish.net/pipermail/baserock-dev-baserock.org/2017-June/thread.html | 12:33 |
ssam2 | which you should subscribe to, by the way -- https://listmaster.pepperfish.net/cgi-bin/mailman/listinfo/baserock-dev-baserock.org | 12:33 |
tristan | jonathanmaw, let's put it this way: If an element depends on another element for the purpose of building, it is assumed that element needs to run | 12:33 |
ssam2 | tristanmaat: that said, gnu-toolchain from buildstream-tests should work -- what issue are you having ? | 12:33 |
tristan | jonathanmaw, but aside from the typo in your question (I think), your answer is yes: An element's BUILD dependencies imply the RUN dependencies *of those BUILD dependencies* | 12:34 |
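The scope rule tristan states here can be sketched with a toy model (illustrative classes only, not BuildStream's actual `Element` API): an element's BUILD scope is the full RUN closure of each of its direct build dependencies.

```python
# Toy model of the rule above: BUILD scope = the RUN closure of every
# direct build dependency (names and classes here are hypothetical).

class Element:
    def __init__(self, name, build_deps=None, run_deps=None):
        self.name = name
        self.build_deps = build_deps or []
        self.run_deps = run_deps or []

    def run_closure(self, seen=None):
        # Scope.RUN: the element itself plus its runtime deps, recursively
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []
        seen.add(self.name)
        result = [self]
        for dep in self.run_deps:
            result.extend(dep.run_closure(seen))
        return result

    def build_scope(self):
        # Scope.BUILD: each build dep is assumed to need to *run*,
        # so its whole RUN closure is pulled in
        seen = set()
        result = []
        for dep in self.build_deps:
            result.extend(dep.run_closure(seen))
        return result

libc = Element("libc")
gcc = Element("gcc", run_deps=[libc])
app = Element("app", build_deps=[gcc])

# gcc's runtime dependency libc lands in app's BUILD scope
print([e.name for e in app.build_scope()])  # → ['gcc', 'libc']
```

This matches jonathanmaw's later observation that iterating `dependencies(Scope.RUN)` per build dep and asking for `Scope.BUILD` directly give the same result.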
tristan | tristanmaat, I may have left that branch stale I dont know, I thought it was building... otherwise try the 'build-gnome' branch | 12:36 |
tristan | tristanmaat, When you say "it doesnt work", do you mean that stage2-make breaks at configure time, desiring the presence of aclocal-1.14 ? | 12:36 |
tristan | tristanmaat, and you have not pulled master in the last 2 hours or so ? | 12:37 |
tristan | (of buildstream, where I recently corrected that regression ?) | 12:37 |
tristanmaat | Hang on... | 12:37 |
tristanmaat | No new commits, so I am up-to-date | 12:39 |
tristanmaat | bst checkout gnu-toolchain.bst test | 12:40 |
tristanmaat | Loading: 030 | 12:40 |
tristanmaat | Resolving: 030/030 | 12:40 |
tristanmaat | Checking: 030/030 | 12:40 |
tristanmaat | 12:40 | |
tristanmaat | ERROR: Artifact missing for gnu-toolchain/gnu-toolchain/0e4186ada10a17bcee4f98acf7b29e879d3f3b73941b6987c0bdbbb9d1b5ef4d | 12:40 |
tristanmaat | Build actually works | 12:40 |
tristanmaat | Curiously | 12:40 |
tristan | Oh ? | 12:46 |
tristanmaat | ... I am misunderstanding checkout, right? It needs to be built first | 12:46 |
tristanmaat | Damnit | 12:46 |
tristan | tristanmaat, ok then you probably caught me in the act of regressing something else, and gnu-toolchain branch is not a problem | 12:46 |
tristan | I need 15 minutes to push this other fix and probably fix that at the same time... | 12:47 |
tristanmaat | Ok it's not priority at the moment anyway | 12:47 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 5 commits (last: sandbox.py: Added 'artifact' keyword argument to mark_directory() API) https://gitlab.com/BuildStream/buildstream/commit/9b32e105b1662781b87b6df21124f5e52097c2fc | 13:03 |
tristan | Ok... jonathanmaw... I fixed those issues and image deployments work again \o/ ! | 13:03 |
jonathanmaw | \o/ | 13:03 |
tristan | Also, I simplified the script element to remove those complex searches and replaced with self.search(Scope.BUILD) | 13:03 |
tristan | which will just work | 13:03 |
tristan | now let's see about a checkout regression | 13:03 |
tristan | tristanmaat, first, run `bst show gnu-toolchain.bst`, is the artifact really cached ? | 13:06 |
tristan | tristanmaat, second, I am not seeing that problem, *however*, I have another issue; bst checkout is not doing something I think it should be doing | 13:06 |
jonathanmaw | tristan: that's odd, I just did a test comparing the contents of self.dependencies(Scope.RUN), and dependencies(Scope.RUN) for every Scope.BUILD dependency of the element, and they looked very different. https://pastebin.com/tVgKXxb1 | 13:06 |
tristan | jonathanmaw, ... | 13:08 |
tristan | <tristan> Also, I simplified the script element to remove those complex searches and replaced with self.search(Scope.BUILD) | 13:08 |
tristan | jonathanmaw, read: self.search(Scope.BUILD...) | 13:08 |
tristan | not self.search(Scope.RUN) | 13:08 |
jonathanmaw | aha | 13:08 |
tristan | jonathanmaw, try it again but put Scope.BUILD in the self | 13:08 |
tristanmaat | tristan: Sorry, had my browser over this... I misunderstood the checkout command, I thought it would build if the artifacts weren't cached. | 13:09 |
tristan | tristanmaat, ahh, yeah it doesnt do that... but still, what you want wont work | 13:10 |
tristanmaat | What is the other issue though? | 13:10 |
tristan | I'm a bit perplexed | 13:10 |
tristan | What should it do | 13:10 |
tristan | currently it checks out an artifact into a directory, which is normally expected to be pipeline output | 13:10 |
tristan | tristanmaat, but the thing is, gnu-toolchain.bst is a symbolic stack element | 13:11 |
tristan | So, if you run `bst checkout gnu-toolchain/stage2-binutils.bst` | 13:12 |
jonathanmaw | tristan: yep, doing Scope.BUILD made them come out identical | 13:12 |
tristan | You will get exactly that | 13:12 |
tristan | A checkout of stage2-binutils.bst in the specified directory | 13:12 |
tristan | But none of its dependencies | 12:12 |
tristanmaat | So if you try gnu-toolchain.bst you get nothing? | 13:12 |
tristan | Yeah, you get the empty artifact | 13:13 |
tristanmaat | Oh. Handy | 13:13 |
tristan | and return status 0, no error, successful checkout of empty gnu-toolchain.bst | 13:13 |
tristan | jonathanmaw, anyway I cleaned it up, much less verbose this way :) | 13:14 |
tristan | So, I guess it would be logical to stage and integrate into the target directory | 13:14 |
tristan | And have a --scope argument like other bst commands do | 13:14 |
tristan | Because... `bst checkout` was originally intended to get at something complete... i.e. a pipeline will have outputs like system images, it works fine for that | 13:15 |
tristan | But... since those system images themselves will not have integration commands, or runtime dependencies... I guess it doesnt hurt the original use case, to make `bst checkout` do a stage of dependencies and integration steps | 13:16 |
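The improved `bst checkout` tristan is describing — stage the target's runtime closure into one directory, then apply integration commands (e.g. ldconfig) — could look roughly like this toy sketch (all names hypothetical, files modelled as a dict rather than a real filesystem):

```python
# Hypothetical sketch of the discussed checkout flow: stage the Scope.RUN
# closure of the target, dependencies first so the target overlays them,
# then collect each element's integration commands to run in that tree.

class Element:
    def __init__(self, name, files, run_deps=(), integration=()):
        self.name = name
        self.files = files                  # path -> contents
        self.run_deps = list(run_deps)
        self.integration = list(integration)

    def run_closure(self):
        # Depth-first Scope.RUN traversal: dependencies before dependents
        seen, order = set(), []

        def visit(element):
            if element.name in seen:
                return
            seen.add(element.name)
            for dep in element.run_deps:
                visit(dep)
            order.append(element)

        visit(self)
        return order

def checkout(target):
    staged, commands = {}, []
    for element in target.run_closure():
        staged.update(element.files)        # later elements overlay earlier
        commands.extend(element.integration)
    return staged, commands

# A symbolic stack like gnu-toolchain.bst has no files of its own, but
# checking it out now pulls in its dependencies instead of staying empty
glibc = Element("glibc", {"/lib/libc.so": "..."}, integration=["ldconfig"])
stack = Element("gnu-toolchain", {}, run_deps=[glibc])

staged, commands = checkout(stack)
print(sorted(staged))   # → ['/lib/libc.so']
print(commands)         # → ['ldconfig']
```

With the old behaviour, checking out the stack would have produced only its own (empty) artifact and exited successfully.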
tristan | Ok, I've been wanting to clean that up anyway, I'll do it now | 13:17 |
tristanmaat | I suppose as a workaround for now I can just checkout each individual element? | 13:18 |
tristan | tristanmaat, nah that's not really as good | 13:18 |
tristan | Well, you *could* | 13:18 |
tristan | keeping in mind, you only need the elements in stage3 | 13:18 |
tristan | tristanmaat, actually in your case it's pretty fine | 13:19 |
tristan | tristanmaat, you will want to run ldconfig yourself when chrooting into there, but it's probably not even really needed | 13:19 |
tristan | anyway I'll try to get a better bst checkout done now... | 13:20 |
tristanmaat | Good, I thought for a second I completely misunderstood how this works... | 13:20 |
tristan | got something almost working... | 13:47 |
tristan | nice | 13:53 |
tristan | Ok little bug, but it's unrelated and we can fix it later | 14:03 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 5 commits (last: sandbox.py: Dont ensure directories at mark_directory() time) https://gitlab.com/BuildStream/buildstream/commit/9ed44f16885343100407ef267df20b5b6c7aea7d | 14:08 |
tristan | tristanmaat, alright it's done, but there will be some caveats to that long term afaics, maybe we can do tarball checkouts that do it better after fixing #38 | 14:08 |
tristan | I.e. you will still have the same issue in regular flat checkouts, where suid/sgid bits will be stripped for security, and every file will belong to the checking out user | 14:09 |
tristan | but for your purpose it will work well | 14:09 |
tristan | and when we have awesome #38, then we can do similar, use introspection of the artifact cache real data to populate attributes in a tarball | 14:10 |
tristanmaat | At least I won't have to check out dependencies manually for the time being | 14:10 |
tristanmaat | :) | 14:10 |
* tristan has an issue when doing bst checkout libsecret.bst from the gnome project, oddly; it has managed to set some xattrs which I am not allowed to copy over to a checkout directory | 14:11 | |
tristan | nah, it works quite nicely :) | 14:11 |
tristan | OKAY ! | 14:12 |
tristan | I think all the current fires are out :) | 14:12 |
* ssam2 flicks lighter | 14:14 | |
tristan | alright dont burn the house down ssam2 ! | 14:21 |
tristan | So I guess, tomorrow I have to try and examine this artifact cache sharing stuff... | 14:22 |
tristan | And soon I have to get back to making GNOME builds work | 14:23 |
tristan | sigh, todo list... getting long... I also want to add a symbol that I can use to check the version of ostree with | 14:23 |
*** jude has quit IRC | 14:32 | |
*** tristan has quit IRC | 14:33 | |
*** tristan has joined #buildstream | 15:01 | |
*** tristanmaat is now known as tlater | 15:03 | |
*** jude has joined #buildstream | 15:33 | |
*** jonathanmaw has quit IRC | 15:46 | |
tlater | Is the assemble step in the build process managed entirely through plugins? | 16:19 |
ssam2 | there's some code in buildelement.py | 16:21 |
ssam2 | I think the plugins just provide the commands to run | 16:21 |
tlater | What commands should I run then? Morph doesn't have that implemented, at least not in the script | 16:22 |
tlater | -> To build a package | 16:23 |
ssam2 | each element knows what commands need to be run to build that element | 16:24 |
ssam2 | you'll probably need to add a method to the Element or BuildElement class to actually get that info out in a useful form | 16:24 |
tristan | I changed this a bit last week | 16:24 |
tristan | for flexibility, I broke down ->assemble into 3 stages: ->configure_sandbox(), ->stage() and ->assemble() | 16:25 |
tristan | You will want to add a new method to Elements which is only optionally implementable, to output the script with formatted variables and everything | 16:26 |
tristan | I suppose that not implementing it fires ImplError and the engine can just tell you that it cant do anything with elements which fire that | 16:27 |
tlater | That could go into the stage() step, then? | 16:27 |
tristan | stage is for filling up the build sandbox with data | 16:27 |
tristan | I dont think that you will want to use any of the existing methods there | 16:28 |
tristan | For the creating of source directories, the buildstream core already provides everything you need (because of Source->stage()) | 16:28 |
tristan | For serializing scripts, some creative design is needed | 16:29 |
*** jude has quit IRC | 16:30 | |
tristan | I.e. keep in mind that the fact that build elements run shell scripts, is really just a detail of build elements; other elements can do things without that at all | 16:30 |
*** jude has joined #buildstream | 16:30 | |
tristan | Also, I was hoping to allow script elements at least to use other interpreters than shell | 16:30 |
tlater | That last bit shouldn't be too hard, at least | 16:31 |
tristan | in any case, you probably only care about BuildElement implementations, and ensuring that stacks do a noop instead of telling the engine ImplError | 16:31 |
tristan | tlater, true but for the time being I dont think we support ScriptElement derived things for this | 16:31 |
tristan | ScriptElements dont accept any sources, and they let you stage multiple artifacts at different places in the sandbox | 16:32 |
tristan | So for instance, you can use host tools A, to deploy sysroot B into an image | 16:32 |
tristan | it would be nice to have other things than just shell in build elements, but again; it's not really useful, every source module I know of responds to shell commands to build :) | 16:33 |
* tristan has never typed: python3 ... ok lets run some python commands inside the interpreter to run 'make' :) | 16:33 |
tristan | that said; other-than-shell stuff has happened in the past, RPM spec files support them for postinst scriptlets for instance | 16:34 |
tlater | So "keep support in mind, but don't implement"? | 16:34 |
tristan | Yeah, better to focus on only BuildElement and Stack implementation | 16:35 |
tristan | dont go on a tangent hehe | 16:35 |
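The pattern discussed above — a base `Element` method that is only optionally implementable, raising `ImplError` so the engine can report which elements can't produce a script, while stacks deliberately no-op — might be sketched like this (method and class names are illustrative, not BuildStream's real API):

```python
import io

# Minimal sketch of the "optionally implementable" script-export method:
# the base class raises ImplError, build-element-alikes emit their
# substituted commands, and stack-alikes quietly contribute nothing.

class ImplError(Exception):
    pass

class Element:
    def write_script(self, out):
        # Default: this element kind cannot be serialized to a script
        raise ImplError(f"{type(self).__name__} cannot produce a script")

class BuildElement(Element):
    def __init__(self, commands):
        self.commands = commands  # already variable-substituted commands

    def write_script(self, out):
        out.write("#!/bin/sh\nset -e\n")
        for cmd in self.commands:
            out.write(cmd + "\n")

class StackElement(Element):
    def write_script(self, out):
        pass  # stacks are a no-op, not an error

buf = io.StringIO()
BuildElement(["./configure", "make", "make install"]).write_script(buf)
StackElement().write_script(buf)  # contributes nothing, raises nothing
print(buf.getvalue())

try:
    Element().write_script(buf)
except ImplError as e:
    print("engine reports:", e)
```

The engine only needs to catch `ImplError` to tell the user which elements in the pipeline cannot be exported as a script.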
tristan | starting from here: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/buildelement.py#L176 | 16:36 |
tristan | You can see how the BuildElement formats the commands, which were substituted in the function below with self.node_subst_list_element() | 16:37 |
tlater | Yup, that makes it a lot easier | 16:38 |
tlater | I was digging through Element | 16:38 |
tristan | Instead, those need to be outputted to some file handle the engine passes to it | 16:38 |
tristan | So, Element will certainly need an API for composing a script, which fires ImplError for most everything not concerned with that | 16:38 |
tristan | but BuildElement mostly needs to implement that | 16:39 |
tristan | Also, you will want the calling code in _pipeline.py, to be allowed to override variables | 16:39 |
tristan | Which might be tricky | 16:39 |
tristan | tlater, because you need to override variables during the load process, since they are normally substituted directly at instantiation time | 16:40 |
tristan | That is done here: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_pipeline.py#L279 | 16:41 |
tlater | How does loading work then? I thought variables could be defined there as well? | 16:41 |
tristan | directly during Pipeline.__init__() | 16:41 |
tristan | before the pipeline has any idea of what you want to do with it | 16:41 |
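The timing problem tristan describes — variables are substituted into commands at element instantiation, so overrides passed in after `Pipeline.__init__()` arrive too late — can be illustrated with a toy substitution model (not BuildStream's real variable machinery):

```python
# Toy illustration of why variable overrides must happen at load time:
# templates are expanded once, when the element is instantiated, and no
# template text survives to be overridden later.

def substitute(template, variables):
    for name, value in variables.items():
        template = template.replace("%{" + name + "}", value)
    return template

class ToyElement:
    def __init__(self, raw_commands, variables):
        # Expansion happens here, during (the equivalent of) loading --
        # after this point there is only the final string
        self.commands = [substitute(c, variables) for c in raw_commands]

defaults = {"prefix": "/usr"}
overrides = {**defaults, "prefix": "/opt"}

# Overrides applied before instantiation take effect...
early = ToyElement(["make install DESTDIR=%{prefix}"], overrides)
# ...whereas an element built with the defaults is already baked
late = ToyElement(["make install DESTDIR=%{prefix}"], defaults)

print(early.commands[0])  # → make install DESTDIR=/opt
print(late.commands[0])   # → make install DESTDIR=/usr
```

Hence the calling code in `_pipeline.py` would have to inject overrides before element instantiation, which is what makes the refactor awkward.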
tristan | So, I look forward to seeing your creative solution to re-arrange that code so that it can be done, and doesnt look too spaghetti :D | 16:42 |
tlater | Wish me luck... x) | 16:43 |
tristan | heh, it's almost 2am, so I wont get too deep into figuring solutions for that right now :) | 16:43 |
tlater | That's a fair point, I need to go anyway | 16:44 |
*** jude has quit IRC | 16:49 | |
*** jude has joined #buildstream | 16:49 | |
*** tlater has quit IRC | 16:56 | |
*** jude has quit IRC | 17:03 | |
*** ssam2 has quit IRC | 18:25 | |
*** tristan has quit IRC | 20:09 | |
gitlab-br-bot | buildstream: merge request (sam/traceback-fixes->master: Replace a few tracebacks with actual error messages) #24 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/24 | 22:28 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!