*** gtristan has joined #baserock | 04:04 | |
*** gtristan has quit IRC | 05:35 | |
*** gtristan has joined #baserock | 05:39 | |
*** paulwaters_ has joined #baserock | 07:15 | |
*** noisecell has joined #baserock | 07:52 | |
*** jude__ has joined #baserock | 08:02 | |
*** gtristan has quit IRC | 08:16 | |
*** jude__ has quit IRC | 08:49 | |
*** jude_ has joined #baserock | 08:54 | |
*** gtristan has joined #baserock | 09:05 | |
*** rdale has joined #baserock | 09:08 | |
benbrown_ | paulsher1ood: a bump in the artifact version is causing builds to fail (note some of the failures in the pipeline are due to timeout): https://gitlab.com/benjamb/ybd/pipelines/6767010 | 09:27 |
paulsher1ood | well, that looks like git mirroring is failing, not builds. | 09:28 |
paulsher1ood | maybe git.baserock.org has not been playing along? | 09:29 |
benbrown_ | there are some gcc failures in that pipeline also | 09:31 |
benbrown_ | ubuntu 16.10, for example | 09:31 |
paulsher1ood | benbrown_: i find it hard to believe that bumping *alone* causes that... is that pipeline also running your git-lfs stuff? | 09:34 |
gtristan | paulsher1ood, build essential changes are rare | 09:35 |
benbrown_ | paulsher1ood: No, I created a separate branch that only bumps the version | 09:35 |
benbrown_ | that's what that pipeline's from | 09:35 |
gtristan | different host tools + cache key revision change = things can explode | 09:35 |
* paulsher1ood doesn't see how | 09:38 | |
*** jonathanmaw has joined #baserock | 09:39 | |
gtristan | paulsher1ood, so you always build build essential on one host for a year... then you move the build process to a new host with different host tools | 09:40 |
ironfoot | he is right, i've seen in the past some gcc versions (in the host) not being able to build our stage1-gcc | 09:40 |
gtristan | paulsher1ood, and since you dont change the artifact cache revision, it never rebuilds build-essential | 09:40 |
gtristan | until one day you do | 09:40 |
gtristan | and boom ! | 09:40 |
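To make the failure mode gtristan describes concrete, here is a minimal, hypothetical sketch (not ybd's actual code; names and structure are illustrative) of a content-based cache key that mixes in an artifact-version constant but deliberately ignores host-tool identity, so a stale build-essential artifact keeps being reused until the version is bumped:

```python
import hashlib
import json

ARTIFACT_VERSION = 1  # hypothetical constant; bumping it invalidates every cached artifact


def cache_key(definition, dependency_keys):
    """Return a content-based cache key for one chunk definition.

    Note what is *not* hashed: the host compiler, libc or any other host
    tool used during the bootstrap stages.  Two different hosts therefore
    share the same build-essential artifact, and a stage that can no longer
    be built from scratch on a newer host goes unnoticed until the key
    changes for some other reason, e.g. an artifact-version bump.
    """
    payload = {
        'artifact-version': ARTIFACT_VERSION,
        'definition': definition,               # repo, ref, build commands, ...
        'dependencies': sorted(dependency_keys),
    }
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode('utf-8')).hexdigest()
```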
paulsher1ood | aha | 09:41 |
paulsher1ood | what's the solution, then? | 09:41 |
gtristan | *cough* buildstream :) | 09:41 |
ironfoot | xD | 09:41 |
ironfoot | there is a no-kbas build though | 09:41 |
ironfoot | which should be testing that? | 09:41 |
paulsher1ood | gtristan: no, i mean, what's the solution *today* | 09:41 |
paulsher1ood | doesn't test it on all hosts | 09:42 |
gtristan | paulsher1ood, I dont have one handy, can try to debug build-essential to pass on the new host | 09:42 |
gtristan | Or build it on an older host and upload the artifact, that's fastest | 09:42 |
gtristan | build it in a chroot into a flatpak freedesktop sdk checkout | 09:43 |
paulsher1ood | don't the pipelines already upload to kbas? so if any host built with the bump, re-running the rest should 'succeed' | 09:43 |
gtristan | only if those gitlab runners ever succeeded in building build-essential in the first place | 09:44 |
gtristan | fix the runners ? | 09:44 |
gtristan | to use an older toolchain known to work ? | 09:44 |
paulsher1ood | https://gitlab.com/benjamb/ybd/pipelines/6767010 - two of the builds succeeded | 09:45 |
paulsher1ood | oh, but maybe branches don't upload | 09:45 |
noisecell | also, no-kbas succeed | 09:45 |
benbrown_ | paulsher1ood: forks won't upload (I'm in a fork) | 09:46 |
paulsher1ood | right | 09:46 |
benbrown_ | I could push the branch to baserock/ybd? | 09:46 |
benbrown_ | I assume the kbas password is set in project cars? | 09:46 |
benbrown_ | vars* | 09:46 |
gtristan | debian stretch fails, that's strange, it's passed here for me (but I have not apt-get upgraded in a while) | 09:46 |
paulsher1ood | benbrown_: yup | 09:46 |
paulsher1ood | in any case, this is a definitions issue, not to do with ybd artifact-bump | 09:47 |
noisecell | gtristan, both debians fail; the odd thing is that ubuntu 16.10 fails when ubuntu 14.10 succeeds | 09:47 |
gtristan | odd indeed | 09:47 |
gtristan | paulsher1ood, it's not really a definitions issue, it's an issue with relying on non-deterministic host tooling | 09:48 |
noisecell | paulsher1ood, I don't think ybd or definitions are the issue. I guess these runners do not have the correct packages to build | 09:48 |
noisecell | do you configure the runners with the same package version? | 09:48 |
paulsher1ood | gtristan: our definitions have always done that, hence it's a definitions issue :) | 09:48 |
benbrown_ | jessie was well on its way wrt building, it just hit the timeout | 09:48 |
gtristan | some of these failures I'm not sure of though | 09:49 |
benbrown_ | stretch did fail at stage1-gcc | 09:49 |
ironfoot | there were various weird errors in that pipeline. Let's make sure the problem here is not something else, like docker, or gitlab | 09:49 |
gtristan | python2-markdown ? | 09:49 |
paulsher1ood | (it's inherent in our urge to be able to bootstrap, iiuc) | 09:49 |
benbrown_ | gtristan: timeout, given my build was in a fork with different project settings | 09:49 |
ironfoot | it's not a definitions issue if they were designed to run in a controlled host environment | 09:49 |
benbrown_ | that would likely have passed | 09:49 |
noisecell | ironfoot, I got your point | 09:50 |
paulsher1ood | ironfoot: if they were designed for a controlled host environment, we wouldn't need a three-stage-bootstrap? | 09:50 |
gtristan | paulsher1ood, we would still want one I believe | 09:50 |
gtristan | maybe 2 stages would be enough | 09:50 |
gtristan | the third stage is mostly for verification anyway | 09:51 |
paulsher1ood | maybe my memory is faulty, but at the time, i understood the design of the bootstrapping of build-essential to be expressly about achieving isolation from host tools | 09:51 |
ironfoot | which is a different thing | 09:52 |
gtristan | there is a comment in build-essential saying that the third gcc is mostly to verify that we can build ourselves (as the first stage will be compiling the cross compiler, and second stage is cross-compiling the native compiler, third is to build native compiler with native compiler) | 09:53 |
gtristan | (even though there is no real cross, it's done that way for bootstrap purpose) | 09:53 |
paulsher1ood | is it? all i'm saying is that the 'fix' for this needs to be done in definitions - there isn't a ybd fix for it | 09:53 |
gtristan | meh, yocto does compile the base with host tools, and they spend much effort to bless supported distro versions as they roll out | 09:54 |
gtristan | this would be the same effort | 09:54 |
* paulsher1ood would prefer to minimise the effort | 09:55 | |
gtristan | Ok, give me 2 weeks, tops. | 09:55 |
noisecell | gtristan, why are we talking about the 3rd stage of gcc, when the ones which fail are the 1st and 2nd in the pipelines we are looking at? | 09:55 |
gtristan | :) | 09:55 |
ironfoot | gtristan: to finish deployment in BuildStream? | 09:56 |
noisecell | :) | 09:56 |
ironfoot | gtristan: you crazy, and will regret it | 09:56 |
gtristan | noisecell, because paulsher1ood said we may not need a third stage if we had a controlled host environment; I think it's still desirable, or at least unnecessary to remove | 09:56 |
gtristan | ironfoot, I wont regret it if I _actually get 2 weeks_ | 09:57 |
paulsher1ood | gtristan: i didn't say that. i said that the original design was to handle uncontrolled host environment | 09:57 |
gtristan | ironfoot, we build the whole GNOME stack, and it took me one day to convert another unrelated definitions-using project | 09:57 |
gtristan | paulsher1ood, ok, I was referring to: | 09:58 |
gtristan | <paulsher1ood> ironfoot: if they were designed for a controlled host environment, we wouldn't need a three-stage-bootstrap? | 09:58 |
paulsher1ood | gtristan: has any buildstream output been run|tested ? | 09:58 |
paulsher1ood | gtristan: i was asking the question :) | 09:58 |
gtristan | paulsher1ood, I need to write a deployment | 09:58 |
gtristan | :) | 09:58 |
gtristan | that's basically it | 09:58 |
gtristan | Besides that, the 2 weeks, is so that we cover the artifact cache sharing | 09:58 |
gtristan | then we have core features supported pretty much | 09:59 |
ironfoot | so yes. Ok I agree, in 2 weeks we might have something better, but I'd say paulsher1ood would appreciate a solution today | 09:59 |
gtristan | probably that fits in less time, but hey leave some buffer in the estimates | 09:59 |
gtristan | solution for today, build somewhere it works and upload the artifacts ? | 09:59 |
gtristan | seems the quickest way out | 10:00 |
paulsher1ood | to be clear i'm happy for buildstream to become the solution here... i just don't want people staring at broken pipelines for weeks :) | 10:00 |
ironfoot | (assumes that everyone else is happy to consume those artifacts) | 10:00 |
paulsher1ood | jjardon: are you around? | 10:00 |
gtristan | ironfoot, that is absolutely no change from yesterday, or pre-artifact-version-bump | 10:01 |
gtristan | ironfoot, that assumption I mean | 10:01 |
gtristan | not a regression | 10:01 |
ironfoot | right, yeah, you are right | 10:01 |
paulsher1ood | jjardon: are artifacts only uploaded by master ybd pipelines currently? | 10:01 |
* paulsher1ood can run a build with artifact-version bump elsewhere and upload, if that's the consensus | 10:03 | |
jjardon | Let me check | 10:03 |
jjardon | Yeah, all the artifacts from any branch are uploaded | 10:05 |
* jjardon reads context | 10:06 | |
benbrown_ | I have https://gitlab.com/baserock/ybd/pipelines/6775393 kicked off already, should do the job | 10:07 |
paulsher1ood | w00t, then :) | 10:08 |
ironfoot | heh | 10:08 |
paulsher1ood | gtristan: are you aware of benbrown_'s git-lfs work? presumably buildstream would benefit from equivalent functionality | 10:10 |
jjardon | benbrown if you run in a fork, increase the timeout of your builds; it's only 60 min by default, I think | 10:12 |
gtristan | I was thinking about that yeah | 10:12 |
gtristan | benbrown_, can you explain what exactly the thing is ? | 10:12 |
benbrown_ | jjardon: I did, but only to 300 | 10:12 |
gtristan | benbrown_, I'm curious specifically, if it has to be used with a specific repo, or if it can be all handled client side | 10:12 |
jjardon | benbrown is the timeout the only problem? | 10:12 |
jjardon | Yeah 5 hours is probably not enough to build without kbas | 10:14 |
ironfoot | jjardon: <benbrown_> paulsher1ood: a bump in the artifact version is causing builds to fail (note some of the failures in the pipeline are due to timeout): https://gitlab.com/benjamb/ybd/pipelines/6767010 | 10:14 |
ironfoot | the problem being that some hosts fail to build definitions from scratch | 10:15 |
jjardon | Oh! | 10:16 |
benbrown_ | gtristan: no special requirements on repos | 10:16 |
benbrown_ | other than what lfs already requires to setup | 10:17 |
jjardon | Yeah, new versions of GCC cannot build old versions; I think I saw that bug somewhere | 10:17 |
benbrown_ | we just check the .gitattributes for lfs filters and run the appropriate fetch/pull commands | 10:17 |
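A minimal sketch of the detection approach benbrown_ describes, assuming an already checked-out repository; the function name and structure are illustrative, not the merge request's actual code:

```python
import os
import subprocess


def fetch_lfs_files(checkout_dir):
    """Only invoke git-lfs when .gitattributes declares the lfs filter."""
    attributes = os.path.join(checkout_dir, '.gitattributes')
    if not os.path.exists(attributes):
        return  # plain repository: git-lfs never needed
    with open(attributes) as f:
        if 'filter=lfs' not in f.read():
            return
    # The repository tracks files with git-lfs: download the objects and
    # replace the pointer files in the working tree with the real content.
    subprocess.check_call(['git', 'lfs', 'fetch'], cwd=checkout_dir)
    subprocess.check_call(['git', 'lfs', 'checkout'], cwd=checkout_dir)
```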
ironfoot | jjardon: solution proposed: Upload the artifact needed to kbas | 10:17 |
* jjardon will disable kbas in those jobs to see where we are at | 10:18 | |
ironfoot | (which is being done atm by the pipeline running in definitions/ybd) | 10:18 |
*** ssam2 has joined #baserock | 10:18 | |
*** ChanServ sets mode: +v ssam2 | 10:18 | |
jjardon | Sure, but the pipeline will still be lying: it seems it supports all those distributions but the only thing it is doing is reusing the kbas cache | 10:19 |
gtristan | benbrown_, so what does this mean... the .gitattributes in a given checkout might say "this was cloned from a git lfs repo" ? | 10:23 |
ironfoot | jjardon: I agree | 10:23 |
gtristan | benbrown_, I dont really understand... if I want to use git lfs can I use it locally on a git repo that doesnt have that ? Or... can I use git without the lfs extension installed when cloning/checking out from a repo that does not have that thing in the .gitattributes ? | 10:24 |
benbrown_ | gtristan: The .gitattributes will specify files that are managed by git lfs, if that doesn't exist, calls to git-lfs are not necessary | 10:28 |
gtristan | benbrown_, and if it does exist, calls to git-lfs are a hard requirement ? or not ? | 10:28 |
benbrown_ | regular gits/definitions won't need to have git-lfs installed | 10:28 |
benbrown_ | not a hard requirement | 10:29 |
gtristan | will just be sub-optimal | 10:29 |
gtristan | Ok | 10:29 |
gtristan | benbrown_, sounds good :) | 10:29 |
gtristan | thanks ! | 10:29 |
benbrown_ | gtristan: there's a pull request in baserock/ybd if you want to check it out | 10:29 |
gtristan | linky ? | 10:30 |
paulsher1ood | https://gitlab.com/baserock/ybd/merge_requests/313 | 10:34 |
benbrown_ | too fast for me | 10:34 |
* paulsher1ood would like to see the pipeline for that succeed | 10:34 | |
benbrown_ | gtristan: description needs an update | 10:34 |
benbrown_ | paulsher1ood: working on it :) | 10:34 |
paulsher1ood | :) | 10:34 |
gtristan | benbrown_, any reason why you pass --global to git config at all ? | 10:36 |
gtristan | hence the need for os.environ['GIT_CONFIG_NOSYSTEM'] = "1", and even if so, causing pollution of user's global git configuration ? | 10:36 |
benbrown_ | gtristan: `git lfs install` installs to global (user) config, primarily to cover the fact that I wasn't passing --local to install previously | 10:37 |
benbrown_ | but seems like a sane option, given anyone could sudo `git lfs install` and find things not working | 10:37 |
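For contrast, a sketch of the `--local` alternative being discussed here, which keeps the git-lfs filter configuration inside the repository rather than in ~/.gitconfig; this is illustrative of the option, not what the current branch does:

```python
import subprocess


def enable_lfs_locally(repo_dir):
    """Configure the git-lfs smudge/clean filters for one repository only.

    `git lfs install --local` writes to <repo_dir>/.git/config rather than
    ~/.gitconfig, so the user's global configuration is left untouched and
    setting GIT_CONFIG_NOSYSTEM becomes unnecessary.
    """
    subprocess.check_call(['git', 'lfs', 'install', '--local'], cwd=repo_dir)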
* benbrown_ put a backtick in the wrong places | 10:38 | |
gtristan | So you also make a hard requirement on git lfs being installed | 10:38 |
gtristan | if you read it in the .gitattributes | 10:38 |
gtristan | but you say that's not necessary right ? | 10:39 |
gtristan | just slower ? | 10:39 |
benbrown_ | gtristan: Well, it's a requirement if you actually have a repository in your definitions that is managed by git lfs | 10:40 |
paulsher1ood | so iiuc, try to do the right thing if we encounter a repo containing the magic, error out if it fails (eg because git lfs not installed) | 10:42 |
benbrown_ | yh | 10:42 |
benbrown_ | but yes, given it reads .gitattributes it will be a little slower than not checking | 10:43 |
gtristan | benbrown_, that's what I was asking | 10:46 |
gtristan | benbrown_, so I _cannot_ use git without lfs for a remote repo that uses git lfs | 10:46 |
gtristan | even at the expense of caching large files without extra smartness locally | 10:47 |
benbrown_ | right, but this check is on checkout | 10:47 |
gtristan | yes I'm wondering how to handle it in buildstream, where we do preflight checks for host tooling | 10:47 |
gtristan | but this cannot be a preflight check, because you need to access network or have the git mirror already cached | 10:47 |
gtristan | so has to be a delayed error | 10:48 |
benbrown_ | gtristan: there is currently no caching of those binaries as a result, though; git-lfs requires/expects a remote server | 10:48 |
* gtristan was hopeful to be able to avoid that, and just add a warning that the user should install git lfs to speed things up | 10:48 | |
benbrown_ | and errors out when fetching from a local path | 10:48 |
benbrown_ | so cannot be used with the current git caching mechanism, without some hacking | 10:48 |
gtristan | ewww | 10:49 |
benbrown_ | unless we somehow have ybd serve gits to itself with some magic | 10:49 |
gtristan | so you mean, you only get a partial clone ? | 10:49 |
benbrown_ | or patch git-lfs | 10:49 |
gtristan | even with git clone --mirror ? | 10:49 |
benbrown_ | git clone --mirror does no fetching of lfs binaries | 10:49 |
gtristan | :-/ | 10:49 |
gtristan | But then, there is a way to explicitly fetch the ones you need for a given commit into your mirror ? | 10:50 |
benbrown_ | yes | 10:50 |
benbrown_ | which is what the series currently does | 10:50 |
gtristan | So at least you dont need network at checkout time | 10:50 |
benbrown_ | oh, into your mirror | 10:50 |
gtristan | Yes. | 10:50 |
benbrown_ | that's the thing, how do I fetch from that mirror? | 10:50 |
benbrown_ | given lfs expects a remote path | 10:51 |
gtristan | I want to guarantee that after running `bst fetch`, I dont need network to build | 10:51 |
gtristan | benbrown_, and that remote path is in .gitattributes correct ? | 10:51 |
benbrown_ | gtristan: no, it gets it from remote.<remote>.url | 10:51 |
gtristan | benbrown_, can be done with git show rev:file or smth similar to parse that file for a given sha without doing any checkout | 10:52 |
gtristan | Ok anyway, I'll have to figure on this, want to be sure I can cache only the binaries I need for a given revision in the mirror, and then need no network at checkout time | 10:52 |
benbrown_ | (remote.<remote>.url being in the gitdir .git/config) | 10:54 |
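A sketch of the mirror-side idea gtristan floats: read .gitattributes for a given commit without a checkout (via `git show`), and if it declares the lfs filter, prefetch that commit's LFS objects into the mirror. Whether `git lfs fetch` cooperates inside a bare mirror is an assumption here, and it still leaves unsolved the problem benbrown_ raises of checking out from a local mirror:

```python
import subprocess


def commit_uses_lfs(mirror_dir, sha):
    """Inspect .gitattributes at a commit without checking anything out."""
    try:
        attributes = subprocess.check_output(
            ['git', 'show', '%s:.gitattributes' % sha], cwd=mirror_dir)
    except subprocess.CalledProcessError:
        return False  # no .gitattributes at that commit
    return b'filter=lfs' in attributes


def prefetch_lfs_objects(mirror_dir, remote, sha):
    """Pull only the LFS objects that <sha> references into the mirror."""
    if commit_uses_lfs(mirror_dir, sha):
        # Assumption: git-lfs will fetch into this (bare) mirror; serving
        # those objects back out at checkout time is the part that still
        # needs solving, since git-lfs expects a remote URL.
        subprocess.check_call(['git', 'lfs', 'fetch', remote, sha],
                              cwd=mirror_dir)
```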
*** ctbruce has joined #baserock | 11:04 | |
jjardon | paulsher1ood: benbrown_ https://gitlab.com/baserock/ybd/merge_requests/316 | 11:43 |
paulsher1ood | approved | 11:46 |
noisecell | jjardon, why not add gcc-4.9 to these runners? | 11:47 |
ironfoot | not to the runners, to the docker images used, iirc | 11:48 |
ironfoot | these images are the latest published in docker hub, and represent the latest versions of some distros | 11:49 |
ironfoot | we could of course require the installation of an older version of gcc-4.9 in the install deps script | 11:49 |
ironfoot | s/of gcc-4.9/of gcc (like 4.9)/ | 11:50 |
jjardon | because that is not what they ship by default, and I'm actually not sure those distros will even have those versions in their repos | 11:50 |
jjardon | debian stretch (testing) doesn't have 4.9, for example | 11:51 |
noisecell | jjardon, I was checking that, ubuntu 16.10 does have it http://packages.ubuntu.com/search?keywords=gcc-4.9 | 11:51 |
noisecell | jjardon, and I'm sure I've installed a 4.x version in stretch in the past | 11:52 |
jjardon | https://packages.debian.org/search?keywords=gcc-4.9 | 11:52 |
noisecell | jjardon, https://packages.debian.org/stretch/gcc | 11:52 |
noisecell | bad link sorry | 11:53 |
jjardon | paulsher1ood: cheers | 11:53 |
jjardon | even if we can force people to install old gcc versions, I think the proper solution for this is https://gitlab.com/baserock/definitions/issues/8 | 11:57 |
ironfoot | is that the real fix for ybd's CI? | 11:59 |
noisecell | jjardon, ok, it was more curiosity and trying to save 2 of your tests | 11:59 |
* ironfoot hides | 11:59 | |
jjardon | ironfoot: yes, because it's not a problem in ybd, it's a problem in definitions | 12:00 |
noisecell | jjardon, confirmed, I can not find a 4.x gcc package in debian stretch (I think I did force it and then upgrade to 6.x after a period of time) | 12:05 |
noisecell | jjardon, sorry about the noise | 12:05 |
jjardon | noisecell: np :) | 12:41 |
*** noisecell has quit IRC | 12:52 | |
*** noisecell has joined #baserock | 12:53 | |
*** jude_ has quit IRC | 14:18 | |
*** cosm has quit IRC | 14:20 | |
*** cosm has joined #baserock | 14:23 | |
*** jude_ has joined #baserock | 14:26 | |
jjardon | Hi, is there any env variable to indicate the src folder when writing .morph files? | 14:48 |
ssam2 | I think you have to work it out based on chunk name | 14:49 |
ssam2 | not sure if that's even provided anywhere in the environment | 14:49 |
* ironfoot wonders what's the usecase | 14:49 | |
ironfoot | with src folder you mean the folder with the source code taken from git? or the classic /src (or similar) folder used to store all things | 14:51 |
*** gtristan has quit IRC | 15:00 | |
*** jude_ has quit IRC | 15:14 | |
*** gtristan has joined #baserock | 15:25 | |
*** jude_ has joined #baserock | 15:29 | |
*** jonathanmaw has quit IRC | 16:25 | |
jjardon | taken from git | 16:43 |
jjardon | for some modules I have to 'cd' to folders so I'd like to come back again to the 'default' folder | 16:44 |
ssam2 | can you use `cd -` ? | 16:45 |
ironfoot | every new command (new element in list of commands) will start in the source folder | 16:46 |
ironfoot | so, if you can do that, you won't have to cd to this folder | 16:47 |
ironfoot | - | | 16:48 |
ironfoot | cd foo | 16:48 |
ironfoot | make | 16:48 |
ironfoot | - cmd from default folder | 16:48 |
*** noisecell has quit IRC | 16:58 | |
gtristan | ironfoot, that's a bit impractical when you've just spent ~20 lines exporting FOO, BAR, BAZ, EVERY_ENV_VAR_UNDER_THE_SUN :) | 17:12 |
gtristan | which, I suspect is found in the same morph hehe | 17:12 |
SotK | add an "export SOURCE_DIR=`pwd`" to that pile of exports before the cd? | 17:14 |
* gtristan takes that approach usually yes | 17:17 | |
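A sketch of the behaviour ironfoot describes, assuming a ybd/morph-style runner (illustrative only, not the real sandbox code): each element in a command list is run as a fresh shell whose working directory is the source checkout, so a `cd` in one element never leaks into the next, and SotK's `export SOURCE_DIR=\`pwd\`` trick is only needed within a single multi-line element:

```python
import subprocess


def run_build_commands(commands, source_dir, env):
    """Run each morph command list element as a fresh shell in the source dir.

    Because every element starts again in source_dir, a `cd` in one element
    never affects the next one; only commands inside the same multi-line
    element need something like `export SOURCE_DIR=$(pwd)` before a `cd`.
    """
    for command in commands:
        subprocess.check_call(['sh', '-e', '-c', command],
                              cwd=source_dir, env=env)
```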
*** ctbruce has quit IRC | 17:19 | |
*** ssam2 has quit IRC | 18:09 | |
*** jude_ has quit IRC | 18:35 | |
jjardon | thanks everyone | 19:14 |
*** jude_ has joined #baserock | 19:19 | |
*** rdale has quit IRC | 19:56 | |
*** jude_ has quit IRC | 21:15 | |
*** gtristan has quit IRC | 21:21 | |
*** jude_ has joined #baserock | 21:22 | |
*** jude_ has quit IRC | 21:26 | |
*** jjardon_ has joined #baserock | 23:10 |