*** johanhelsing has quit IRC | 03:11 | |
*** johanhelsing has joined #automotive | 03:17 | |
*** Tarnyko2 has joined #automotive | 05:08 | |
*** Tarnyko has quit IRC | 05:09 | |
*** Figure has quit IRC | 06:01 | |
*** Figure has joined #automotive | 06:03 | |
*** Tarnyko2 has quit IRC | 06:37 | |
*** jobol has joined #automotive | 07:16 | |
*** rajm has joined #automotive | 07:16 | |
*** leon has joined #automotive | 07:28 | |
*** leon is now known as Guest43614 | 07:28 | |
*** ctbruce has joined #automotive | 07:32 | |
*** jonathanmaw has joined #automotive | 07:32 | |
*** Guest43614 is now known as leon-anavi | 07:33 | |
leon-anavi | morning | 07:33 |
*** yannick_ has joined #automotive | 07:37 | |
*** yannick_ is now known as Guest54702 | 07:37 | |
*** fredcadete has joined #automotive | 07:39 | |
*** jonathanmaw has quit IRC | 07:40 | |
fredcadete | good morning | 07:45 |
*** jonathanmaw has joined #automotive | 07:49 | |
*** Guest54702 is now known as yannick__ | 07:50 | |
*** toscalix has joined #automotive | 08:00 | |
*** CTtpollard has joined #automotive | 08:07 | |
CTtpollard | morning | 08:24 |
*** ashwasimha_ has joined #automotive | 08:39 | |
CTtpollard | leon-anavi: looks like the gdp go.cd is still having issues with rust | 08:54 |
leon-anavi | hi CTtpollard, so that issue affects all the pipelines? | 08:54 |
CTtpollard | leon-anavi: yep, failed to fetch the rustc url | 08:55 |
leon-anavi | hm... :( I have to check this | 08:56 |
leon-anavi | clearing some space to start a build on my desktop. | 08:57 |
CTtpollard | leon-anavi: cool, might as well just tell bitbake to go straight to rust | 09:07 |
CTtpollard | I could 'hotfix' it by dropping sota-client for now | 09:21 |
leon-anavi | I am directly going for "bitbake rvi-sota-client" | 09:29 |
CTtpollard | kk | 09:30 |
CTtpollard | leon-anavi: I created a ticket for it https://at.projects.genivi.org/jira/browse/GDP-249 | 10:02 |
*** DSVenky has quit IRC | 10:22 | |
*** gunnarx has joined #automotive | 10:42 | |
gunnarx | CTtpollard, it looks like the standard GDP pipelines are not converted to single-branch yet | 10:43 |
gunnarx | pedroalvarez, ^^ | 10:44 |
CTtpollard | not fully no | 10:44 |
gunnarx | Was I supposed to do it? :) | 10:44 |
CTtpollard | I think our point of view was to see how the new pipelines were faring, before touching the 'standard' ones, switching to daily etc | 10:45 |
gunnarx | OK, but your latest merge (FSA) fails on the PR builds (because the latest PR is a fail-test) but the standard pipelines are not being rebuilt. | 10:45 |
gunnarx | Right, we spoke of setting them on a timer... | 10:46 |
gunnarx | I suppose we might as well do that. | 10:46 |
CTtpollard | I've setup a ticket for the PR's failing due to rust. leon-anavi is trying locally | 10:47 |
leon-anavi | CTtpollard, rvi sota client builds on my side | 10:47 |
gunnarx | Yep. But I saw also the PR test pipelines are rebuilt when you merge new stuff to master | 10:47 |
gunnarx | The latest PR is my own deliberate FAIL test. | 10:48 |
gunnarx | There are 4 active GDP targets on the new one right? No Koelsch/Silk on the new setup yet? | 10:48 |
leon-anavi | CTtpollard, I shared the log in JIRA | 10:48 |
CTtpollard | the exception being thrown by rustc is here https://github.com/rust-lang/rust/blob/1.7.0/src/etc/snapshot.py#L148 | 10:48 |
leon-anavi | I built rvi-sota-client for rpi2 | 10:48 |
CTtpollard | ty leon-anavi | 10:48 |
CTtpollard | gunnarx: we discussed that it might be viable to still build new commits even if the PR passed (i.e. the situation where another merge has happened in the meantime) but those specific pipelines triggering is probably not the right way to handle it | 10:50 |
CTtpollard | if possible, those should PR trigger only | 10:50 |
gunnarx | Yeah, we could try avoiding that. But I think the plugin builds standard master in order to have a reference, at least the very first time you set it up you need to. | 10:52 |
gunnarx | Not 100% sure how to right now. | 10:53 |
gunnarx | Anyhow, I can fix the standard pipelines of Minnowboard, QEMU, RPI2. They are fetching from GitHub but not modified for single-branch. So in their current state they are unusable. | 10:54 |
gunnarx | I can pause them in the meantime. | 10:54 |
CTtpollard | thanks | 10:57 |
CTtpollard | does go.cd not have some foo to say 'only trigger for PR'? | 10:57 |
gunnarx | I don't have too much time but I want to make a quick fix. | 10:58 |
gunnarx | I don't know, possibly not. It's a plugin doing the PR stuff and under development... | 10:58 |
gunnarx | For normal pipelines you just set it to not trigger on git change, and trigger it manually or timer. For the PR plugin IDK | 10:59 |
gunnarx | Sorry, some more sync up needed. | 10:59 |
CTtpollard | yup | 10:59 |
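[Editor's note: the "don't trigger on git change, run manually or on a timer" setup gunnarx describes would look roughly like this in GoCD's cruise-config XML. This is a hedged sketch from memory of the GoCD config format — the pipeline name and repository URL are placeholders, and `autoUpdate`/`<timer>` behaviour should be checked against the GoCD docs for the deployed version.]

```xml
<!-- Sketch: pipeline that is NOT triggered by polling the git material.
     autoUpdate="false" stops change-triggered runs; the <timer> (Quartz-style
     cron: sec min hour day-of-month month day-of-week) runs it nightly.
     Manual triggering still works. Names and URL are placeholders. -->
<pipeline name="GDP-Yocto-Minnowboard">
  <timer>0 0 1 * * ?</timer>
  <materials>
    <git url="https://github.com/GENIVI/genivi-dev-platform.git"
         branch="master" autoUpdate="false" />
  </materials>
  <!-- stages elided -->
</pipeline>
```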
gunnarx | The usage of $MACHINE is basically the choice of bsp? | 10:59 |
CTtpollard | yes | 10:59 |
gunnarx | so it's the board configuration... | 10:59 |
gunnarx | so MACHINE will be set to silk and koelsch eventually? | 10:59 |
gunnarx | b/c there's some older pipeline using $BOARD for this. | 11:00 |
CTtpollard | trying to replicate the rust failure locally on a minnow env, setting the same threads/parallel as the agent is using to see if that's causing an issue at all | 11:00 |
CTtpollard | gunnarx: I'd like it for every target, but I think we've only really got bandwidth for 1 board per bsp for now | 11:00 |
CTtpollard | same for raspi3 | 11:01 |
gunnarx | I'm talking about organization, not b/w | 11:01 |
gunnarx | As I said, I'm happy to pause the pipelines temporarily | 11:01 |
gunnarx | oh developer bandwidth? | 11:01 |
CTtpollard | then sure, pipeline for every target eventually | 11:01 |
CTtpollard | no, hardware resources | 11:01 |
gunnarx | Anyway, I'm just setting up the structure regardless. | 11:01 |
gunnarx | OK but for the hardware, we just keep some of them paused then | 11:02 |
CTtpollard | There's a lot of pipe dreams (pardon the pun) for what we can achieve, just wanted to create a workable system that was achievable / realistic for the first stage | 11:03 |
gunnarx | Also, sorry to pile it on, but some of our pipelines use $MACHINE (env vars) and some #{MACHINE} (parameters). This is because of multiple masters here and I'm the guilty party... But it's not urgent. We can align that detail later. | 11:03 |
gunnarx | :) | 11:03 |
gunnarx | NP, I'll just update the ones that are obviously wrong at this point (no single-branch support) for starters. | 11:04 |
CTtpollard | the two new pipeline types we've introduced are highly inefficient, but it's a base to build upon | 11:04 |
CTtpollard | gunnarx: thanks :) | 11:05 |
gunnarx | As of now master is broken with the Rust problem right, except for QEMU target? (If so I won't trigger any of the pipelines right now) | 11:09 |
CTtpollard | gunnarx: apparently so, as of yet unreplicated outside of the pipeline | 11:11 |
CTtpollard | if I my current test locally cannot achieve the failure, I plan to build it directly on the agent that is failing | 11:12 |
gunnarx | ah, OK | 11:12 |
CTtpollard | and handle the gdp10 integration work... | 11:13 |
gunnarx | Yes, I'm not trying to reprioritize your time - just some sync | 11:17 |
leon-anavi | CTtpollard, how is your current local test going? ping me when you have some results | 11:18 |
CTtpollard | leon-anavi: still not complete as of yet | 11:19 |
leon-anavi | ok | 11:20 |
leon-anavi | I have to dig into another task. Pls ping me when you have results to investigate them together. | 11:20 |
CTtpollard | cool :) | 11:21 |
leon-anavi | CTtpollard, is there some kind of OE mirror for GDP? | 11:41 |
CTtpollard | not as of yet, we've talked about pre-mirroring for the go.cd pipelines, not sure about public usage though | 11:42 |
CTtpollard | probably a question for the tools team | 11:43 |
leon-anavi | ok | 11:44 |
gunnarx | leon-anavi, are you looking for better availability (sites going down), or build efficiency, or ... | 11:44 |
leon-anavi | gunnarx, just brainstorming what might have caused the issue while the pipelines were fetching rust. | 11:45 |
gunnarx | ok | 11:45 |
leon-anavi | Because 30 min ago I managed to build the recipe for rvi sota client and its dependencies without any issues on my side. | 11:45 |
gunnarx | for which target? | 11:45 |
CTtpollard | 502/845 here locally | 11:45 |
leon-anavi | So Tom and I are still wondering what went wrong for the build from the pipelines | 11:46 |
leon-anavi | gunnarx, I built it on my side for raspberry pi 2 | 11:46 |
gunnarx | ok | 11:46 |
gunnarx | I thought it looked like it was failing for all but QEMU, but I didn't look very deeply | 11:46 |
CTtpollard | the initial outcome would be that, although the qemu pipeline was the first to build, so it might just be coincidence | 11:47 |
leon-anavi | CTtpollard, it is the same source code for all targets so fetching shouldn't be specific for the machine, right? | 11:48 |
CTtpollard | leon-anavi: yep, and it's not probable that it's just an arm issue, as minnow x86 also failed | 11:48 |
leon-anavi | ok | 11:50 |
CTtpollard | my build VM is on the same physical machine as the go agent, but there might be some network differences as public/private etc | 11:51 |
pedroalvarez | this is the result of having build systems downloading things from the webs | 11:53 |
pedroalvarez | someone should tell them off | 11:54 |
gunnarx | as opposed to downloading it from a "trusted site" :) | 11:55 |
gunnarx | thing is pedroalvarez, first of all no one ultimately trusts anything other than their own local mirror | 11:55 |
gunnarx | And the GENIVI git (hosted at LF) was unreliable too... | 11:56 |
gunnarx | so I don't think there's a really obvious solution, except probably to make it really really simple to keep your own local mirror? | 11:56 |
gunnarx | But I'm eager to hear what the rust failure turns out to be. If it's an unreliable upstream site, then sure, having more local mirrors that "we" control can help the situation. For any production level work, you want even more local mirrors... | 11:58 |
leon-anavi | yes, sure, we are all interested to find out why it failed. | 12:02 |
gunnarx | pedroalvarez, I've been cleaning a little in the Go configs. Trying to reduce the number of templates. | 12:06 |
pedroalvarez | cool | 12:07 |
gunnarx | So no special one for Renesas needed with your _generic one. | 12:07 |
*** praneeth has quit IRC | 12:07 | |
*** praneeth has joined #automotive | 12:07 | |
gunnarx | But it has encoded porter now, so silk/koelsch need to be added whenever those are setup again in the new setup | 12:08 |
gunnarx | I copied the generic one into my "with SDK" variant, so they are the same except for adding Yocto SDK | 12:08 |
pedroalvarez | yes, I was aware of that, and was planning to fix it once added support for others | 12:08 |
gunnarx | sure, I know you are aware. Just summarizing | 12:09 |
gunnarx | We agreed to switch /srv/go/dl to /var/cache/yocto/downloads as standard? | 12:09 |
gunnarx | Oh and sstate too | 12:09 |
gunnarx | I'm also wondering about the threads/parallel make stuff. Do you want/need this for CT agent? The default seems pretty good I think. | 12:10 |
gunnarx | I've kept a little experiment with local PREMIRRORS, but you prefer DL_DIR as the mechanism for avoiding download then? | 12:12 |
gunnarx | We can go for that | 12:12 |
CTtpollard | gunnarx: iirc we dropped it down to 4 manually because of a problematic race condition that was happening with 8 | 12:12 |
gunnarx | Ah, OK | 12:12 |
CTtpollard | can't put my finger on it though | 12:13 |
gunnarx | Alright, let's keep it across the board then, for now | 12:13 |
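[Editor's note: the parallelism setting discussed here lives in the build's local.conf (or whatever config the pipeline injects); dropping from the default down to 4 would look roughly like this — a sketch, assuming the GDP pipelines set it this way:]

```conf
# local.conf fragment: cap bitbake task parallelism and make jobs at 4
# to dodge the race condition seen with 8 (per CTtpollard above)
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
```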
pedroalvarez | I only played with DL_DIR, if PREMIRRORS is better we should use that | 12:13 |
gunnarx | I'm not sure if it makes any difference at all. I had a vague feeling of it being safer. The shared dir won't have the ".done" files and such state that might somehow interfere with the state of other builds, I imagined | 12:14 |
gunnarx | But it's far-fetched, just some gut feeling. DL_DIR is more efficient as there won't be any local copies made at all. | 12:14 |
pedroalvarez | right, then green light to move to /var/cache/yocto/downloads | 12:16 |
gunnarx | I suppose sharing sstate is surely a lot more dangerous in comparison. I really don't know if it is stable when building multiple different architectures. | 12:16 |
pedroalvarez | there is only one way to figure out | 12:16 |
gunnarx | so we go with DL_DIR or PREMIRRORS? | 12:18 |
gunnarx | Can you manage the additional disk space.... I guess it's 2-3 GB per pipeline or so | 12:18 |
gunnarx | OK, let's try DL_DIR then | 12:19 |
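[Editor's note: the two mechanisms being weighed would look roughly like this in local.conf. A sketch only — the PREMIRRORS patterns are illustrative, not the agents' actual configuration:]

```conf
# Option 1 (chosen above): point every build at one shared download dir,
# so nothing is copied locally and fetches hit the cache directly
DL_DIR = "/var/cache/yocto/downloads"

# Option 2: keep per-build DL_DIRs, but consult a local mirror before
# going to the network (gunnarx's PREMIRRORS experiment)
PREMIRRORS_prepend = "\
    git://.*/.*   file:///var/cache/yocto/downloads/ \n \
    http://.*/.*  file:///var/cache/yocto/downloads/ \n \
    https://.*/.* file:///var/cache/yocto/downloads/ \n"
```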
gunnarx | pedroalvarez, since you green-lit it, I have changed everywhere to /var/cache/yocto/downloads :) Can you make sure your agent has the path set up and it is writable by go user? | 12:22 |
pedroalvarez | yup, did it just before saying we could change :) | 12:25 |
gunnarx | you are as reliable as a rock | 12:27 |
pedroalvarez | ;) | 12:28 |
pedroalvarez | I haven't done anything with sstate yet | 12:28 |
gunnarx | So I'm editing a JIRA issue and wondering why the browser is freezing up.... Oh. Apparently I CTRL-V the entire Go configuration XML file into the comment - that's why.... :-P | 12:30 |
*** Tarnyko has joined #automotive | 12:53 | |
gunnarx | pedroalvarez, what do you mean done anything. The pipeline sets SSTATE it seems, so I assume it means the different pipeline builds are sharing sstate? | 13:16 |
gunnarx | Speaking of which, could that be causing CTtpollard 's current build issues? | 13:17 |
pedroalvarez | all pipelines are sharing the sstate, located in /srv/go/sstate (IIRC) | 13:18 |
pedroalvarez | I thought we were trying to align, so I was expecting a move to /var/cache/yocto/sstate | 13:18 |
pedroalvarez | or similar | 13:18 |
gunnarx | Oh now I see what you mean | 13:18 |
gunnarx | Yes, that would be logical, yes. | 13:18 |
CTtpollard | maybe they're trying to unpack what the qemu pipeline used and failing in that way | 13:19 |
gunnarx | Shall I change that too? | 13:19 |
gunnarx | CTtpollard, yes, sstate cache seems a suspect for this type of error right? | 13:19 |
pedroalvarez | gunnarx: If you are ok with that path, then yes | 13:19 |
gunnarx | Yes I'm OK with it | 13:19 |
CTtpollard | gunnarx: it looks like it's trying to fetch a remote snapshot, and a CA cert problem is tripping it | 13:20 |
gunnarx | OK changed to /var/cache/yocto/sstate | 13:20 |
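[Editor's note: for reference, the aligned shared-state path agreed here would be set like this in local.conf — a sketch, assuming the caches sit under /var/cache/yocto as discussed:]

```conf
# local.conf fragment: shared sstate cache across all pipelines on an agent
# (moved from /srv/go/sstate to align with the downloads cache)
SSTATE_DIR = "/var/cache/yocto/sstate"
```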
gunnarx | Well we should trigger a pipeline in the meantime I guess, see if problem fixes itself :) | 13:21 |
*** ashwasimha_ has quit IRC | 13:21 | |
pedroalvarez | +1 | 13:22 |
gunnarx | pedroalvarez, can you look at https://go.genivi.org/go/tab/build/detail/GDP-Yocto-RaspberryPi2/19/build/1/init_and_bitbake | 13:23 |
pedroalvarez | gunnarx: "fatal: Remote branch raspberrypi2 not found in upstream origin" | 13:24 |
gunnarx | ah ok | 13:24 |
gunnarx | my bad then. hang on | 13:24 |
gunnarx | Sorry, thought it was something with your local agent. All of them now set to the master branch | 13:26 |
CTtpollard | I'm sharing sstate cache locally across targets, no issue as of yet sharing rust | 13:28 |
gunnarx | CTtpollard, sure but the agent sstate might be corrupt somehow | 13:29 |
CTtpollard | gahh, going to have to reboot | 13:36 |
*** CTtpollard has quit IRC | 13:37 | |
*** CTtpollard has joined #automotive | 13:38 | |
gunnarx | Welcome back. At least you reboot in under one minute... | 13:39 |
CTtpollard | and that's without an ssd! | 13:39 |
gunnarx | I feel sorry for you now. | 13:41 |
CTtpollard | 'mmc0: error -110 whilst initialising SD card' | 13:41 |
CTtpollard | ty monday | 13:41 |
gunnarx | oh no | 13:42 |
fredcadete | shared sstate killed your ssd! | 13:42 |
gunnarx | sd | 13:42 |
fredcadete | even so | 13:42 |
fredcadete | the murderer | 13:43 |
gunnarx | OK some bugs after I hacked around, was missing some MACHINE environment variables. But now the build runs. Lots of setscene going on, which is nice to see | 13:44 |
gunnarx | Lots of setscene | 13:47 |
gunnarx | oops | 13:47 |
gunnarx | repeat | 13:47 |
*** CTtpollard has quit IRC | 13:47 | |
*** CTtpollard has joined #automotive | 13:49 | |
*** CTtpollard has quit IRC | 13:52 | |
*** CTtpollard has joined #automotive | 13:56 | |
pedroalvarez | gunnarx: can we kill ci_test now :D | 14:05 |
pedroalvarez | ? | 14:05 |
gunnarx | ? | 14:05 |
gunnarx | you mean all of the bizarre failures currently happening? :) | 14:06 |
pedroalvarez | nope, the resource needed in some pipelines | 14:06 |
pedroalvarez | ct_test, I mean | 14:06 |
pedroalvarez | yocto_build assumes /var/cache/yocto/* ? | 14:07 |
CTtpollard | I've asked in #yocto if you can specify specific packages to ignore sstate cache | 14:09 |
gunnarx | oh ct_test! big difference :) | 14:12 |
gunnarx | yes I think you can pretty much nuke it. I am using yocto_build as the preferred resource for the bigger agents that can handle it | 14:12 |
fredcadete | incidentally, I was having a look at the Renesas packages that fail when sharing sstate. For vspmif I have found the issue and I'm trying a fix. If we are lucky it's the same issue in the othe packages | 14:17 |
fredcadete | CTtpollard: for which package were you trying to avoid sstate specifically? | 14:17 |
*** toscalix has quit IRC | 14:17 | |
CTtpollard | fredcadete: it's stemming from rust, specifically rustc by the looks of it | 14:18 |
fredcadete | oops, that one I don't have on my setup | 14:18 |
CTtpollard | gunnarx: if it fails again, I propose nuking the rust work in sstate | 14:20 |
CTtpollard | hopefully it's just a corruption, I can't replicate even on the agent | 14:20 |
pedroalvarez | CTtpollard: could you replicate in the agent by reusing the sstate cache? | 14:21 |
gunnarx | OK. I propose even nuking all of sstate if that's the thing. Make a new fully clean build. | 14:21 |
CTtpollard | pedroalvarez: not tried that yet, I'll start one now | 14:21 |
CTtpollard | that narrows out a network issue though | 14:21 |
gunnarx | I didn't understand a word of the answer you got in #yocto :) | 14:22 |
fredcadete | bitbake rustc -c cleansstate may be faster to try than nuking the whole sstate | 14:23 |
fredcadete | and gunnarx, me neither... | 14:23 |
gunnarx | oh good :) | 14:23 |
leon-anavi | CTtpollard, most probably you have experienced a temporary network issue. | 14:23 |
CTtpollard | monday has been more testing than usual | 14:37 |
* CTtpollard is glad it's Manchester Beer Week | 14:37 | |
*** gan has joined #automotive | 14:39 | |
*** gunnarx has quit IRC | 14:41 | |
gan | CTtpollard, we've been changing stuff around in the pipelines a bit, and also having issues with the shared downloads and cache on the other agent. But this is on CT agent, I have no real explanation. Maybe you can investigate https://go.genivi.org/go/tab/build/detail/GDP-Yocto-Minnowboard/30/build/1/init_and_bitbake | 14:42 |
gan | A little suspicious failure on navigation service when FSA was just merged? | 14:42 |
*** gan is now known as gunnarx | 14:43 | |
*** gunnarx has left #automotive | 14:49 | |
leon-anavi | This is the first time I hear about "Manchester Beer Week" but it sounds fantastic :) | 14:54 |
*** toscalix has joined #automotive | 14:58 | |
CTtpollard | gan: would look that way | 14:59 |
CTtpollard | the error does not look like it would be arch specific though | 15:01 |
*** jlrmagnus has joined #automotive | 15:01 | |
*** mvick has joined #automotive | 15:06 | |
*** jlrmagnus has quit IRC | 15:14 | |
CTtpollard | ok finally | 15:16 |
pedroalvarez | reproduced? | 15:17 |
CTtpollard | using the same sstate, user, etc on the agent. same failure when trying to compile rvi-sota-client | 15:17 |
CTtpollard | so I'd say clean it using bitbake cleansstate, or nuke the whole contents | 15:18 |
CTtpollard | and see if it happens again after it has populated it once for a target | 15:19 |
CTtpollard | I'll go with the first option and try again | 15:21 |
pedroalvarez | yup | 15:21 |
CTtpollard | actually, I'll do it for rust in general | 15:21 |
leon-anavi | CTtpollard, do you have this error when you build it from scratch? | 15:28 |
CTtpollard | trying to confirm that now, but last time no | 15:29 |
CTtpollard | cleansstate on rust & rust-cross-x86_64 both made no difference in the shared sstate | 15:30 |
*** jlrmagnus has joined #automotive | 15:31 | |
pedroalvarez | time to nuke it all? | 15:33 |
pedroalvarez | or.. maybe is just the website not allowing too many downloads at the same time? | 15:33 |
CTtpollard | I'm trying one last thing | 15:33 |
kooltux | dl9pf_, ping | 15:34 |
CTtpollard | it would be interesting to have a clean sstate, build a non qemu PR pipeline first, and then see if it happens to others after | 15:34 |
*** jobol has quit IRC | 15:38 | |
*** rajm has quit IRC | 15:42 | |
*** jonathanmaw has quit IRC | 15:45 | |
pedroalvarez | CTtpollard: let's do that? | 15:46 |
CTtpollard | hang fire! | 15:46 |
CTtpollard | :) | 15:46 |
pedroalvarez | right, I'll let you do it :) | 15:47 |
CTtpollard | I probably won't be able to do it until the morning | 15:49 |
*** fredcadete has quit IRC | 15:57 | |
*** nisha has quit IRC | 15:59 | |
*** ctbruce has quit IRC | 16:24 | |
leon-anavi | CTtpollard, given these circumstances, what might have caused the issue? | 16:27 |
toscalix | leon-anavi: CTtpollard went home, I think | 16:31 |
leon-anavi | ok | 16:34 |
leon-anavi | I will catch him tomorrow :) | 16:34 |
leon-anavi | sorry that I cannot help much but I was unable to reproduce the issue | 16:34 |
*** AlisonChaiken has quit IRC | 16:35 | |
*** Saint_Isidore has joined #automotive | 16:36 | |
*** AlisonChaiken has joined #automotive | 16:53 | |
toscalix | leon-anavi: np, we will put in more effort tomorrow | 16:59 |
leon-anavi | ok, I will be around in IRC to support you with what I can | 16:59 |
*** leon-anavi has quit IRC | 17:25 | |
*** Tarnyko has quit IRC | 17:53 | |
*** Tarnyko has joined #automotive | 18:24 | |
*** jlrmagnus has quit IRC | 18:47 | |
*** jlrmagnus has joined #automotive | 18:48 | |
*** toscalix has quit IRC | 19:21 | |
*** nisha has joined #automotive | 19:26 | |
*** kooltux_ has joined #automotive | 20:03 | |
*** kooltux_ has quit IRC | 20:11 | |
*** kooltux_ has joined #automotive | 20:26 | |
*** kooltux_ has quit IRC | 20:31 | |
*** jlrmagnus has quit IRC | 21:13 | |
*** jlrmagnus has joined #automotive | 21:37 | |
*** jlrmagnus has quit IRC | 22:12 | |
*** jlrmagnus has joined #automotive | 22:32 | |
*** Tarnyko has quit IRC | 22:44 | |
*** Tarnyko has joined #automotive | 22:47 | |
*** jlrmagnus has quit IRC | 23:28 | |
*** kooltux_ has joined #automotive | 23:30 | |
*** kooltux_ has quit IRC | 23:52 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!