*** tristan has joined #buildstream | 03:18 | |
*** ChanServ sets mode: +o tristan | 04:34 | |
*** jonathanmaw has joined #buildstream | 08:32 | |
tristan | So juergbi... umm, when you say the python ostree push only works with archive-z2 repos, do you mean local ones or remotes ? | 09:49 |
juergbi | tristan: looks like both, unfortunately | 09:50 |
juergbi | for remote ones it's definitely the case, i will recheck for local ones | 09:50 |
juergbi | it failed here with local bare-user and remote archive-z2, though | 09:50 |
tristan | alright, well the shell one is gonna be fine I suppose | 09:50 |
juergbi | yes, that appears to work fine and is very simple (if a dependency on sshfs is ok) | 09:51 |
juergbi | however, it might be a bit slow | 09:51 |
juergbi | my plan is to get it all working with the shell script for now and worry about optimizations later | 09:51 |
tristan | I doubt python will be significantly faster on I/O bound tasks than shell | 09:52 |
tristan | juergbi, anyway note that, if we're going to call out to the shell, we need to use this craziness: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/plugin.py#L370 | 09:53 |
tristan | But if the artifact cache has an element in context, it can use element.call() | 09:53 |
juergbi | ah, for suspend handling and co. | 09:53 |
juergbi | right, element is available, so should be able to do that | 09:54 |
tristan | yeah, and also automatic redirection of stdout/stderr to the appropriate log files | 09:54 |
juergbi | (for initial test i just used subprocess.check_call() ) | 09:55 |
tristan | sure :) | 09:55 |
juergbi | hm, i added this to _ostree.py, though, which doesn't know about elements. have to think about the cleanest way here | 09:55 |
juergbi | btw: did you already have a config key name in mind for the url to the remote artifact repo? | 09:56 |
juergbi | using remote-artifacturl for now. there might be a better choice | 09:56 |
tristan | myeah... well we *could* refactor the shell call-out code into an internal utility function and preserve the API | 09:56 |
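A minimal sketch of what such an internal call-out utility might look like, assuming the push goes over an sshfs mount and uses `ostree pull-local`, as discussed above. The function name and arguments are invented for illustration, and the exact signature of the element's call() helper is assumed to mirror subprocess conventions; this is not BuildStream's actual API.

```python
# Hypothetical sketch only: push a ref to an sshfs-mounted remote repo,
# preferring the plugin/element call() wrapper (suspend handling, log
# redirection) when an element is in context, else plain subprocess.
import subprocess
import tempfile


def push_artifact(local_repo, remote, ref, element=None):
    with tempfile.TemporaryDirectory() as mountpoint:
        commands = [
            # Mount the remote repository locally over sshfs
            ['sshfs', remote, mountpoint],
            # Pull the ref from the local repo into the mounted remote repo
            ['ostree', '--repo=' + mountpoint, 'pull-local', local_repo, ref],
            # Unmount the FUSE mount again
            ['fusermount', '-u', mountpoint],
        ]
        for cmd in commands:
            if element is not None:
                element.call(cmd)  # signature assumed; gives logging/suspend handling
            else:
                subprocess.check_call(cmd)  # error handling otherwise elided in this sketch
```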
tristan | juergbi, maybe 'artifact-share' ? | 09:57 |
tristan | juergbi, anyway I think I will pick up the pieces when you get on vacation and tidy it up then | 09:58 |
tristan | Also I have to think about adding some docs instructing users how to setup and host an artifact cache | 09:58 |
tristan | And, I would very much like to use the gpg signing features if possible | 09:58 |
juergbi | ok. i hope to have push and pull working by the end of the day and will work on tidying up, tests, docs Monday and Tuesday. not sure how far i'll get | 09:59 |
tristan | :) | 09:59 |
juergbi | yes, that's still an open point | 09:59 |
tristan | Right now we do use it for validating incoming data with the ostree source | 09:59 |
tristan | but letting the user sign with their own key is a bit... I don't know | 10:00 |
tristan | I guess we trust the user's key on the remote side just by virtue of having their ssh public key | 10:00 |
tristan | but then how does that user make their public gpg key available for other users to validate downloaded artifacts with? | 10:01 |
tristan | ehh | 10:01 |
tristan | juergbi, just don't think about it for now | 10:01 |
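For reference, the general ostree GPG flow being alluded to looks roughly like the following; the repository paths, key ID, remote name and URL are placeholders, and nothing here reflects a decision about how BuildStream would expose signing.

```python
# Illustrative only: standard ostree GPG signing and verification setup,
# with all paths, key IDs and URLs as placeholders.
import subprocess

# Publisher side: sign the commit that contains the artifact
subprocess.check_call([
    'ostree', '--repo=/srv/artifacts', 'commit',
    '--branch=artifacts/example/ref',
    '--gpg-sign=0123456789ABCDEF',   # placeholder key ID
    '/path/to/artifact-checkout',
])

# Consumer side: import the publisher's public key when adding the remote,
# so subsequent pulls are verified against it
subprocess.check_call([
    'ostree', '--repo=/path/to/local-cache', 'remote', 'add',
    '--gpg-import=/path/to/publisher-pubkey.asc',
    'artifact-share', 'https://artifacts.example.com/repo',
])
```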
tristan | So, just for the benefit of the channel.... Automated GNOME moduleset conversions !!!!!!!!!!!!!!! | 10:03 |
tristan | git clone https://gnome7.codethink.co.uk/gnome-modulesets.git | 10:04 |
tristan | cd gnome-modulesets | 10:04 |
tristan | bst track meta-gnome-apps-tested.bst && bst build meta-gnome-apps-tested.bst | 10:04 |
tristan | And presto... everything builds, *except* for gnome-multi-writer.bst | 10:05 |
tristan | jjardon[m], ^^^^^^^^^^^ | 10:05 |
tristan | :) | 10:05 |
tristan | They are converted every 10 minutes | 10:05 |
tristan | If no changes, no new commits are made | 10:06 |
juergbi | \o/ | 10:08 |
tristan | I'm pretty happy about that anyway | 10:09 |
tristan | Still a ways to go, next I want to automatically generate a side branch (maybe at less frequent intervals) with the results of `bst track` committed | 10:09 |
* jjardon[m] opens bottle of celebration | 10:10 | |
tristan | So I can point to an exact commit in https://gnome7.codethink.co.uk/gnome-modulesets.git and say "This will behave exactly like this" | 10:10 |
tristan | Then, I have to start thinking about generating the whole thing with some extra data on the side which says "include these modules in a GNOME flatpak runtime" | 10:11 |
tristan | And try running flatpaks on the generated runtime | 10:11 |
tristan | maybe cgit would be nice too | 10:14 |
tristan | looks like --fetchers 20 starts to give me problems from git.gnome.org (connection reset by peer) | 12:06 |
* tristan relaxes a bit on the fetches :) | 12:06 | |
* paulsher1ood wonders what gnome7.codethink.co.uk is | 12:08 | |
persia | When someone has time, I'd like to argue against any sort of autocommit. The thrust of the argument is "have you seen what that looks like in any of the git history visualisation tooling?". | 12:08 |
tristan | So I am using gnome7.codethink.co.uk for 2 things now (and added the track task to it too !) | 12:09 |
tristan | paulsher1ood, it's one of the arm machines we're not using right now... | 12:09 |
paulsher1ood | ah, ok | 12:10 |
tristan | So A.) I use it to run a nightly debian multistrap of debian testing on 4 arches | 12:10 |
tristan | arm, aarch64, i386 and x86_64 | 12:10 |
tristan | x86_64 I have tested, and it works as a base to build GNOME on top of | 12:10 |
paulsher1ood | ooh, cool | 12:10 |
tristan | This multistrap automation ends up in an ostree repo | 12:10 |
tristan | which we host on gnome7 | 12:10 |
tristan | And it is revisioned; any update adds new refs to the branches | 12:11 |
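A rough sketch of what committing those nightly multistrap results into per-arch ostree branches could look like; the repo path, branch naming and multistrap output location are guesses for illustration only, not the actual setup on gnome7.

```python
# Rough sketch: one ostree branch per architecture, a new commit per nightly
# run. All paths and branch names are invented for illustration.
import subprocess

ARCHES = ['arm', 'aarch64', 'i386', 'x86_64']

for arch in ARCHES:
    sysroot = '/srv/multistrap/{}/rootfs'.format(arch)  # multistrap output
    subprocess.check_call([
        'ostree', '--repo=/srv/ostree/debian-testing', 'commit',
        '--branch=debian/testing/{}'.format(arch),
        '--subject=Nightly multistrap of Debian testing',
        sysroot,
    ])
```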
tristan | The other thing I'm doing is a continuous conversion process of jhbuild modulesets, which I've got up and running today | 12:11 |
tristan | The idea is that we follow GNOME jhbuild modulesets with this migration, so that at any given time, someone can try a bst build of GNOME, based on a conversion of the modulesets | 12:12 |
tristan | That happens every 10 minutes, but does nothing to the repo if nothing has changed | 12:12 |
tristan | Finally, (just now) I have updated it so that every day (at this time), it will run a full `bst track` on everything it imported | 12:13 |
tristan | Which basically means I can use this to demonstrate something I'm confident about: | 12:13 |
tristan | "Hey, I built this <ref> of gnome-modules.git, that means it will also build for anyone else, exactly as it did for me" | 12:13 |
tristan | Which is something about repeatability that I want to use in my Monday blog post | 12:14 |
tristan | persia, Point taken. However... this is not going to be a git history that anyone will use normally; it is strictly "owned" by the machine | 12:14 |
tristan | persia, however it allows the GNOME release team to decide on their own, when they want to do a switch, and start off from a clean conversion to create their own | 12:15 |
tristan | So fwiw, The converted modulesets end up in: https://gnome7.codethink.co.uk/gnome-modulesets.git/ | 12:15 |
tristan | That can be cloned | 12:15 |
tristan | The master branch is the every 10 minutes following GNOME modulesets | 12:16 |
persia | tristan: Regardless of "owner", as long as no user is expected to operate their git tooling in that git history (either directly or as a result of cloning), my concern is addressed. Note that this includes not having this git history stored on any development machine, but only remote automation (including virtual "remote"). | 12:16 |
tristan | The "tracked" branch is the nightly run which tracks all the refs of everything. | 12:16 |
tristan | persia, I can't control everything, but I can say that it's important to me that, for the following months, the history stays intact (i.e. I want to be able to say: this auto-commit from May 26th builds exactly the same way on August 3rd) | 12:18 |
tristan | So I don't want to mess with that and auto-squash history | 12:18 |
tristan | however, nobody can ever commit to this repo | 12:18 |
tristan | that is certain | 12:18 |
tristan | At some point, GNOME will have to decide to make a switch, and after then, they can modify the new repo that results however they like | 12:19 |
persia | If no human can commit to the autocommit location, that's less bad, although humans might commit to clones, which can end up terrible, depending. I suppose it won't be horrid if the folk deciding to consume the results clean up the history before making it available to others (as that decision is likely to involve human commits). | 12:20 |
tristan | persia, indeed, it should be an rm -rf .git + git init . I think | 12:24 |
tristan | persia, and you can rest assured that if and when this switch takes place, it will be done by the GNOME release team | 12:25 |
persia | tristan: In that case, I shall also rest contented :) | 12:25 |
tristan | of course some rando person can consume it in some way, but the official modulesets are controlled by the release team | 12:25 |
*** tristan has quit IRC | 12:33 | |
jonathanmaw | hrm, I'm going to have to think about whether we run integration commands when staging elements - staging them somewhere other than the sandbox root means running the integration commands can have unexpected results | 14:50 |
jonathanmaw | I wonder if I can put sandboxes inside sandboxes | 14:50 |
persia | +1 to sandboxes within sandboxes being useful. | 15:01 |
persia | One of the reliability issues with tooling like apt and yum is that the integration commands run in an uncontrolled environment, and therefore have difficult-to-predict effects. That said, some of the integration activities depend on the state of the entire system, and so necessarily run in an environment with limited control (e.g. updating the loader cache). | 15:03 |
persia | The idea for addressing this in which I had the most difficulty finding outstanding issues consisted of four different bits of "integration" code: one that ran in a protected environment whenever the element was being integrated into a system, one that ran over a system whenever a specific element was being integrated (so controlled by the system, not by the element), one that ran over a system when that system was being prepared for instantiation, and one | 15:05 |
persia | that ran on first boot of an instantiation of a system. | 15:05 |
persia | This set of constraints allows the developer the most flexibility in controlling what code must have reliable results, and what does not. That said, it is painfully complex, and perhaps hard to document. | 15:06 |
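Purely as an illustration of the "sandbox inside a sandbox" idea, an integration command could be run with the staged tree as its root via bubblewrap, along these lines; this is not BuildStream's sandbox API, and the staging path and command are placeholders.

```python
# Illustrative only: run an integration command with a staged tree as /,
# isolated with bubblewrap. Not BuildStream's actual sandboxing code.
import subprocess


def run_integration(staged_dir, command):
    subprocess.check_call([
        'bwrap',
        '--bind', staged_dir, '/',   # the staged element tree becomes the root
        '--proc', '/proc',
        '--dev', '/dev',
        '--unshare-net',             # keep the integration step offline and contained
        '--chdir', '/',
        'sh', '-c', command,
    ])


# e.g. refreshing the loader cache inside the staged tree
run_integration('/path/to/staging-area', 'ldconfig')
```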
*** jonathanmaw has quit IRC | 16:49 | |
*** tristan has joined #buildstream | 20:43 | |
* tristan pops in at 5:40am and notes that (while sufficiently inebriated)... I'm glad that jonathan arrived at that conclusion :) | 20:47 | |
tristan | I avoided introducing the question of whether or not to run integration commands, and what it means when the staged elements are not at /, in our conversation the other day | 20:48 |
tristan | because it was already enough to digest | 20:48 |