IRC logs for #baserock for Thursday, 2017-03-02

*** gtristan has joined #baserock04:04
*** gtristan has quit IRC05:35
*** gtristan has joined #baserock05:39
*** paulwaters_ has joined #baserock07:15
*** noisecell has joined #baserock07:52
*** jude__ has joined #baserock08:02
*** gtristan has quit IRC08:16
*** jude__ has quit IRC08:49
*** jude_ has joined #baserock08:54
*** gtristan has joined #baserock09:05
*** rdale has joined #baserock09:08
benbrown_paulsher1ood: a bump in the artifact version is causing builds to fail (note some of the failures in the pipeline are due to timeout): https://gitlab.com/benjamb/ybd/pipelines/6767010 09:27
paulsher1oodwell, that looks like git mirroring is failing, not builds.09:28
paulsher1oodmaybe git.baserock.org has not been playing along?09:29
benbrown_there are some gcc failures in that pipeline also09:31
benbrown_ubuntu 16.10, for example09:31
paulsher1oodbenbrown_: i find it hard to believe that bumping *alone* causes that... is that pipeline also running your git-lfs stuff?09:34
gtristanpaulsher1ood, build essential changes are rare09:35
benbrown_paulsher1ood: No, I created a separate branch that only bumps the version09:35
benbrown_that's what that pipeline's from09:35
gtristandifferent host tools + cache key revision change = can explode09:35
* paulsher1ood doesn't see how09:38
*** jonathanmaw has joined #baserock09:39
gtristanpaulsher1ood, so you always build build essential on one host for a year... then you move the build process to a new host with different host tools09:40
ironfoothe is right, i've seen in the past some gcc versions (in the host) not being able to build our stage1-gcc09:40
gtristanpaulsher1ood, and since you dont change the artifact cache revision, it never rebuilds build-essential09:40
gtristanuntil one day you do09:40
gtristanand boom !09:40
paulsher1oodaha09:41
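
A minimal sketch of the failure mode gtristan describes, in Python. This is illustrative only, not YBD's actual cache-key code: the point is that the host toolchain is not part of the key, so a stale build-essential artifact keeps being reused across host upgrades until the artifact version is bumped and everything misses the cache at once.

    import hashlib
    import json

    def cache_key(definition, artifact_version):
        # The key hashes the definition plus a global artifact version.
        # Host tools are deliberately not part of the key, so a cached
        # build-essential keeps being reused even as the host changes.
        blob = json.dumps({'definition': definition,
                           'artifact-version': artifact_version},
                          sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    # Bumping the version changes every key, so build-essential misses the
    # cache and is rebuilt with whatever toolchain the runner has today.
    assert cache_key({'name': 'build-essential'}, 1) != \
           cache_key({'name': 'build-essential'}, 2)
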
paulsher1oodwhat's the solution, then?09:41
gtristan*cough* buildstream :)09:41
ironfootxD09:41
ironfootthere is a no-kbas build though09:41
ironfootwhich should be testing that?09:41
paulsher1oodgtristan: no, i mean, what's the solution *today*09:41
paulsher1ooddoesn't test it on all hosts09:42
gtristanpaulsher1ood, I dont have one handy, can try to debug build-essential to pass on the new host09:42
gtristanOr build it on an older host and upload the artifact, that's fastest09:42
gtristanbuild it in a chroot into a flatpak freedesktop sdk checkout09:43
paulsher1ooddon't the pipelines already upload to kbas? so if any host built with the bump, re-running the rest should 'succeed'09:43
gtristanonly if those gitlab runners ever succeeded in building build-essential in the first place09:44
gtristanfix the runners ?09:44
gtristanto use an older toolchain known to work ?09:44
paulsher1oodhttps://gitlab.com/benjamb/ybd/pipelines/6767010 - two of the builds succeeded09:45
paulsher1oodoh, but maybe branches don't upload09:45
noisecellalso, no-kbas succeed09:45
benbrown_paulsher1ood: forks won't upload (I'm in a fork)09:46
paulsher1oodright09:46
benbrown_I could push the branch to baserock/ybd?09:46
benbrown_I assume the kbas password is set in project cars?09:46
benbrown_vars*09:46
gtristandebian stretch fails, that's strange, I've passed here (but have not apt-get upgraded in a while)09:46
paulsher1oodbenbrown_: yup09:46
paulsher1oodin any case, this is a definitions issue, not to do with ybd artifact-bump09:47
noisecellgtristan, both debian builds fail, the odd thing is that ubuntu 16.10 fails when ubuntu 14.10 succeeds09:47
gtristanodd indeed09:47
gtristanpaulsher1ood, it's not really a definitions issue, it's an issue with relying on non-deterministic host tooling09:48
noisecellpaulsher1ood, I don't think ybd or definitions are the issue. I guess these runners do not have the correct packages to build09:48
noisecelldo you configure the runners with the same package version?09:48
paulsher1oodgtristan: our definitions have always done that, hence it's a definitions issue :)09:48
benbrown_jessie was well on its way wrt building, it just hit the timeout09:48
gtristansome of these failures I'm not sure of though09:49
benbrown_stretch did fail at stage1-gcc09:49
ironfootthere were various weird errors in that pipeline. Let's make sure the problem here is not something else, like docker, or gitlab09:49
gtristanpython2-markdown ?09:49
paulsher1ood(it's inherent in our urge to be able to bootstrap, iiuc)09:49
benbrown_gtristan: timeout, given my build was in a fork with different project settings09:49
ironfootit's not a definitions issue if they were designed to run in a controlled host environment09:49
benbrown_that would likely have passed09:49
noisecellironfoot, I got your point09:50
paulsher1oodironfoot: if they were designed for a controlled host environment, we wouldn't need a three-stage-bootstrap?09:50
gtristanpaulsher1ood, we would still want one I believe09:50
gtristanmaybe 2 stages would be enough09:50
gtristanthe third stage is mostly for verification anyway09:51
paulsher1oodmaybe my memory is faulty, but at the time, i understood the design of the bootstrapping of build-essential to be expressly about achieving isolation from host tools09:51
ironfootwhich is a different thing09:52
gtristanthere is a comment in build-essential saying that the third gcc is mostly to verify that we can build ourselves (the first stage compiles the cross compiler, the second stage cross-compiles the native compiler, and the third builds the native compiler with the native compiler)09:53
gtristan(even though there is no real cross, it's done that way for bootstrap purpose)09:53
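
For reference, the staging gtristan describes corresponds to the usual build/host/target progression of a compiler bootstrap. A rough illustration only, with placeholder triplet names, not the actual build-essential morphs:

    # 'HOST' and 'TARGET' are placeholders; in baserock the architecture is
    # the same on both sides and only the triplet differs, so there is no
    # real cross compile, but the staging is done the cross way on purpose.
    STAGES = [
        # stage1: a cross compiler, built with the host's own tools
        {'build': 'HOST',   'host': 'HOST',   'target': 'TARGET'},
        # stage2: the native compiler, cross-compiled with stage1
        {'build': 'HOST',   'host': 'TARGET', 'target': 'TARGET'},
        # stage3: the native compiler rebuilt with itself, mostly verification
        {'build': 'TARGET', 'host': 'TARGET', 'target': 'TARGET'},
    ]
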
paulsher1oodis it? all i'm saying is that the 'fix' for this needs to be done in definitions - there isn't a ybd fix for it09:53
gtristanmeh, yocto does compile the base with host tools, and they spend much effort to bless supported distro versions as they roll out09:54
gtristanthis would be the same effort09:54
* paulsher1ood would prefer to minimise the effort09:55
gtristanOk, give me 2 weeks, tops.09:55
noisecellgtristan, why are we talking about the 3rd stage of gcc, when the ones which fail are the 1st and 2nd in the pipelines we are looking at?09:55
gtristan:)09:55
ironfootgtristan: to finish deployment in BuildStream?09:56
noisecell:)09:56
ironfootgtristan: you crazy, and will regret it09:56
gtristannoisecell, because paulsher1ood said we may not need a third stage if we had a controlled host environment, I think it's still desirable, or at least unnecessary to remove09:56
gtristanironfoot, I wont regret it if I _actually get 2 weeks_09:57
paulsher1oodgtristan: i didn't say that. i said that the original design was to handle uncontrolled host environment09:57
gtristanironfoot, we build the whole GNOME stack, and it took me one day to convert another unrelated definitions-using project09:57
gtristanpaulsher1ood, ok, I was referring to:09:58
gtristan<paulsher1ood> ironfoot: if they were designed for a controlled host environment, we wouldn't need a three-stage-bootstrap?09:58
paulsher1oodgtristan: has any buildstream output been run|tested ?09:58
paulsher1oodgtristan: i was asking the question :)09:58
gtristanpaulsher1ood, I need to write a deployment09:58
gtristan:)09:58
gtristanthat's basically it09:58
gtristanBesides that, the 2 weeks, is so that we cover the artifact cache sharing09:58
gtristanthen we have core features supported pretty much09:59
ironfootso yes. Ok I agree, in 2 weeks we might have something better, but I'd say paulsher1ood would appreciate a solution today09:59
gtristanprobably that fits in less time, but hey leave some buffer in the estimates09:59
gtristansolution for today, build somewhere it works and upload the artifacts ?09:59
gtristanseems the quickest way out10:00
paulsher1oodto be clear i'm happy for buildstream to become the solution here... i  just don't want people staring at broken pipelines for weeks :)10:00
ironfoot(assumes that everyone else is happy to consume those artifacts)10:00
paulsher1oodjjardon: are you around?10:00
gtristanironfoot, that is absolutely no change from yesterday, or pre-artifact-version-bump10:01
gtristanironfoot, that assumption I mean10:01
gtristannot a regression10:01
ironfootright, yeah, you are right10:01
paulsher1oodjjardon: are artifacts only uploaded by master ybd pipelines currently?10:01
* paulsher1ood can run a build with artifact-version bump elsewhere and upload, if that's the consensus 10:03
jjardonLet me check10:03
jjardonYeah, all the artifacts from any branch are uploaded10:05
* jjardon reads context10:06
benbrown_I have https://gitlab.com/baserock/ybd/pipelines/6775393 kicked off already, should do the job10:07
paulsher1oodw00t, then :)10:08
ironfootheh10:08
paulsher1oodgtristan: are you aware of benbrown_'s git-lfs work? presumably buildstream would benefit from equivalent functionality10:10
jjardonbenbrown if you run in a fork, increase the timeout of your builds; it's only 60 min by default, I think10:12
gtristanI was thinking about that yeah10:12
gtristanbenbrown_, can you explain what exactly the thing is ?10:12
benbrown_jjardon: I did, but only to 30010:12
gtristanbenbrown_, I'm curious specifically, if it has to be used with a specific repo, or if it can be all handled client side10:12
jjardonbenbrown is the timeout the only problem?10:12
jjardonYeah 5 hours is probably not enough to build without kbas10:14
ironfootjjardon: <benbrown_> paulsher1ood: a bump in the artifact version is causing builds to fail (note some of the failures in the pipeline are due to timeout): https://gitlab.com/benjamb/ybd/pipelines/6767010 10:14
ironfootthe problem being that some hosts fail to build definitions from scratch10:15
jjardonOh!10:16
benbrown_gtristan: no special requirements on repos10:16
benbrown_other than what lfs already requires to setup10:17
jjardonYeah new versions of GCC can not build old versions; I think i saw that bug somewhere10:17
benbrown_we just check the .gitattributes for lfs filters and run the appropriate fetch/pull commands10:17
ironfootjjardon: solution proposed: Upload the artifact needed to kbas10:17
* jjardon will disable kbas in those jobs to see where we are at10:18
ironfoot(which is being done atm by the pipeline running in definitions/ybd)10:18
*** ssam2 has joined #baserock10:18
*** ChanServ sets mode: +v ssam210:18
jjardonSure, but the pipeline will still be lying as it seems it supports all those distributions but the only thing it is doing is reusing the kbas cache10:19
gtristanbenbrown_, so what does this mean... the .gitattributes in a given checkout might say "this was cloned from a git lfs repo" ?10:23
ironfootjjardon: I agree10:23
gtristanbenbrown_, I dont really understand... if I want to use git lfs can I use it locally on a git repo that doesnt have that ? Or... can I use git without the lfs extension installed when cloning/checking out from a repo that does not have that thing in the .gitattributes ?10:24
benbrown_gtristan: The .gitattributes will specify files that are managed by git lfs, if that doesn't exist, calls to git-lfs are not necessary10:28
gtristanbenbrown_, and if it does exist, calls to git-lfs are a hard requirement ? or not ?10:28
benbrown_regular gits/definitions won't need to have git-lfs installed10:28
benbrown_not a hard requirement10:29
gtristanwill just be sub-optimal10:29
gtristanOk10:29
gtristanbenbrown_, sounds good :)10:29
gtristanthanks !10:29
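
A rough sketch of the detection benbrown_ describes, with hypothetical helper names; this is illustrative and not the code from the merge request:

    import os
    import subprocess

    def uses_lfs(checkout_dir):
        # True if .gitattributes declares any filter=lfs entries.
        attrs = os.path.join(checkout_dir, '.gitattributes')
        if not os.path.exists(attrs):
            return False
        with open(attrs) as f:
            return any('filter=lfs' in line for line in f)

    def checkout_lfs_files(checkout_dir):
        if not uses_lfs(checkout_dir):
            return  # plain repos never need git-lfs installed
        # only now does git-lfs become a hard requirement
        subprocess.check_call(['git', 'lfs', 'pull'], cwd=checkout_dir)
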
benbrown_gtristan: there's a pull request in baserock/ybd if you want to check it out10:29
gtristanlinky ?10:30
paulsher1oodhttps://gitlab.com/baserock/ybd/merge_requests/313 10:34
benbrown_too fast for me10:34
* paulsher1ood would like to see the pipeline for that succeed10:34
benbrown_gtristan: description needs an update10:34
benbrown_paulsher1ood: working on it :)10:34
paulsher1ood:)10:34
gtristanbenbrown_, any reason why you pass --global to git config at all ?10:36
gtristanhence the need for     os.environ['GIT_CONFIG_NOSYSTEM'] = "1", and even if so, causing pollution of user's global git configuration ?10:36
benbrown_gtristan: `git lfs install` installs to global (user) config, primarily to cover the fact that I wasn't passing --local to install previously10:37
benbrown_but seems like a sane option, given anyone could sudo `git lfs install` and find things not working10:37
* benbrown_ put a backtick in the wrong places10:38
gtristanSo you also make a hard requirement on git lfs being installed10:38
gtristanif you read it in the .gitattributes10:38
gtristanbut you say that's not necessary right ?10:39
gtristanjust slower ?10:39
benbrown_gtristan: Well, it's a requirement if you actually have a repository in your definitions that is managed by git lfs10:40
paulsher1oodso iiuc, try to do the right thing if we encounter a repo containing the magic, error out if it fails (eg because git lfs not installed)10:42
benbrown_yh10:42
benbrown_but yes, given it reads .gitattributes it will be a little slower than not checking10:43
gtristanbenbrown_, that's what I was asking10:46
gtristanbenbrown_, so I _cannot_ use git without lfs for a remote repo that uses git lfs10:46
gtristanat the expense of caching large files without extra smartness locally10:47
benbrown_right, but this check is on checkout10:47
gtristanyes I'm wondering how to handle it in buildstream, where we do preflight checks for host tooling10:47
gtristanbut this cannot be a preflight check, because you need to access network or have the git mirror already cached10:47
gtristanso has to be a delayed error10:48
benbrown_gtristan: there is currently no caching of those binaries though, as git-lfs requires/expects a remote server10:48
* gtristan was hopeful to be able to avoid that, and just add a warning that the user should install git lfs to speed things up10:48
benbrown_and errors out when fetching from a local path10:48
benbrown_so cannot be used with the current git caching mechanism, without some hacking10:48
gtristanewww10:49
benbrown_unless we somehow have ybd serve gits to itself with some magic10:49
gtristanso you mean, you only get a partial clone ?10:49
benbrown_or patch git-lfs10:49
gtristaneven with git clone --mirror ?10:49
benbrown_git clone --mirror does no fetching of lfs binaries10:49
gtristan:-/10:49
gtristanBut then, there is a way to explicitly fetch the ones you need for a given commit into your mirror ?10:50
benbrown_yes10:50
benbrown_which is what the series currently does10:50
gtristanSo at least you dont need network at checkout time10:50
benbrown_oh, into your mirror10:50
gtristanYes.10:50
benbrown_that's the thing, how do I fetch from that mirror?10:50
benbrown_given lfs expects a remote path10:51
gtristanI want to guarantee that after running `bst fetch`, I dont need network to build10:51
gtristanbenbrown_, and that remote path is in .gitattributes correct ?10:51
benbrown_gtristan: no, it gets it from remote.<remote>.url10:51
gtristanbenbrown_, can be done with git show rev:file or smth similar to parse that file for a given sha without doing any checkout10:52
gtristanOk anyway, I'll have to figure this out, want to be sure I can cache only the binaries I need for a given revision in the mirror, and then need no network at checkout time10:52
benbrown_(remote.<remote>.url being in the gitdir .git/config)10:54
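
A hedged sketch of the mirror-side approach being discussed: inspect .gitattributes at a revision without a checkout, and fetch the lfs objects for that revision ahead of time. Function names are hypothetical, and whether git-lfs cooperates fully with a bare mirror is exactly the open question above:

    import subprocess

    def rev_uses_lfs(mirror_dir, rev):
        # Read .gitattributes at a revision without checking anything out.
        try:
            attrs = subprocess.check_output(
                ['git', 'show', '%s:.gitattributes' % rev],
                cwd=mirror_dir).decode()
        except subprocess.CalledProcessError:
            return False  # no .gitattributes at that revision
        return 'filter=lfs' in attrs

    def prefetch_lfs(mirror_dir, rev, remote='origin'):
        # Fetch the lfs objects needed for `rev` ahead of build time.
        # git-lfs resolves its endpoint from remote.<remote>.url, so this
        # step still needs network; serving a later checkout purely from
        # the mirror is the unresolved part discussed above.
        if rev_uses_lfs(mirror_dir, rev):
            subprocess.check_call(['git', 'lfs', 'fetch', remote, rev],
                                  cwd=mirror_dir)
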
*** ctbruce has joined #baserock11:04
jjardonpaulsher1ood: benbrown_ https://gitlab.com/baserock/ybd/merge_requests/316 11:43
paulsher1oodapproved11:46
noisecelljjardon, why not add gcc-4.9 to these runners?11:47
ironfootnot to the runners, to the docker images used, iirc11:48
ironfootthese images are the latest published in docker hub, and represent the latest versions of some distros11:49
ironfootwe could of course require the installation of an older version of gcc-4.9 in the install deps script11:49
ironfoots/of gcc-4.9/of gcc (like 4.9)/11:50
jjardonbecause it's not what they ship by default, and actually I'm not sure those distros will even have those versions in their repos11:50
jjardondebian stretch (testing) doesn't have 4.9, for example11:51
noisecelljjardon, I was checking that, ubuntu 16.10 does have it http://packages.ubuntu.com/search?keywords=gcc-4.9 11:51
noisecelljjardon, and I'm sure I've installed a 4.x version in stretch in the past11:52
jjardonhttps://packages.debian.org/search?keywords=gcc-4.9 11:52
noisecelljjardon, https://packages.debian.org/stretch/gcc 11:52
noisecellbad link sorry11:53
jjardonpaulsher1ood: cheers11:53
jjardoneven if we can force people to install old gcc versions, I think the proper solution is https://gitlab.com/baserock/definitions/issues/8 11:57
ironfootis that the real fix for ybd's CI?11:59
noisecelljjardon, ok, it was more curiosity and trying to save 2 of your tests11:59
* ironfoot hides11:59
jjardonironfoot: yes, because it's not a problem in ybd, it's a problem in definitions12:00
noisecelljjardon, confirmed, I can not find a 4.x gcc package in debian stretch (I think I did force it and then upgrade to 6.x after a period of time)12:05
noisecelljjardon, sorry about the noise12:05
jjardonnoisecell: np :)12:41
*** noisecell has quit IRC12:52
*** noisecell has joined #baserock12:53
*** jude_ has quit IRC14:18
*** cosm has quit IRC14:20
*** cosm has joined #baserock14:23
*** jude_ has joined #baserock14:26
jjardonHi, is there any env variable to indicate the src folder when writing .morph files?14:48
ssam2I think you have to work it out based on chunk name14:49
ssam2not sure if that's even provided anywhere in the environment14:49
* ironfoot wonders what's the usecase14:49
ironfootwith src folder you mean the folder with the source code taken from git? or the classic /src (or similar) folder used to store all things14:51
*** gtristan has quit IRC15:00
*** jude_ has quit IRC15:14
*** gtristan has joined #baserock15:25
*** jude_ has joined #baserock15:29
*** jonathanmaw has quit IRC16:25
jjardontaken from git16:43
jjardonfor some modules I have to 'cd' to folders so I'd like to come back again to the 'default' folder16:44
ssam2can you use `cd -` ?16:45
ironfootevery new command (new element in list of commands) will start in the source folder16:46
ironfootso, if you can do that, you won't have to cd to this folder16:47
ironfoot- |16:48
ironfoot  cd foo16:48
ironfoot  make16:48
ironfoot- cmd from default folder16:48
*** noisecell has quit IRC16:58
gtristanironfoot, that's a bit impractical when you just spent ~20 lines of export FOO, BAR, BAZ, EVERY_ENV_VAR_UNDER_THE_SUN :)17:12
gtristanwhich, I suspect is found in the same morph hehe17:12
SotKadd an "export SOURCE_DIR=`pwd`" to that pile of exports before the cd?17:14
* gtristan takes that approach usually yes17:17
*** ctbruce has quit IRC17:19
*** ssam2 has quit IRC18:09
*** jude_ has quit IRC18:35
jjardonthanks everyone19:14
*** jude_ has joined #baserock19:19
*** rdale has quit IRC19:56
*** jude_ has quit IRC21:15
*** gtristan has quit IRC21:21
*** jude_ has joined #baserock21:22
*** jude_ has quit IRC21:26
*** jjardon_ has joined #baserock23:10

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!