IRC logs for #automotive for Monday, 2016-06-13

03:11 *** johanhelsing has quit IRC
03:17 *** johanhelsing has joined #automotive
05:08 *** Tarnyko2 has joined #automotive
05:09 *** Tarnyko has quit IRC
06:01 *** Figure has quit IRC
06:03 *** Figure has joined #automotive
06:37 *** Tarnyko2 has quit IRC
07:16 *** jobol has joined #automotive
07:16 *** rajm has joined #automotive
07:28 *** leon has joined #automotive
07:28 *** leon is now known as Guest43614
07:32 *** ctbruce has joined #automotive
07:32 *** jonathanmaw has joined #automotive
07:33 *** Guest43614 is now known as leon-anavi
07:33 <leon-anavi> morning
07:37 *** yannick_ has joined #automotive
07:37 *** yannick_ is now known as Guest54702
07:39 *** fredcadete has joined #automotive
07:40 *** jonathanmaw has quit IRC
07:45 <fredcadete> good morning
07:49 *** jonathanmaw has joined #automotive
07:50 *** Guest54702 is now known as yannick__
08:00 *** toscalix has joined #automotive
08:07 *** CTtpollard has joined #automotive
08:24 <CTtpollard> morning
08:39 *** ashwasimha_ has joined #automotive
08:54 <CTtpollard> leon-anavi: looks like the gdp go.cd is still having issues with rust
08:54 <leon-anavi> hi CTtpollard, so that issue affects all the pipelines?
08:55 <CTtpollard> leon-anavi: yep, failed to fetch the rustc url
08:56 <leon-anavi> hm... :( I have to check this
08:57 <leon-anavi> clearing some space to start a build on my desktop.
09:07 <CTtpollard> leon-anavi: cool, might as well just tell bitbake to go straight to rust
09:21 <CTtpollard> I could 'hotfix' it by dropping sota-client for now
09:29 <leon-anavi> I am directly going for "bitbake rvi-sota-client"
09:30 <CTtpollard> kk
10:02 <CTtpollard> leon-anavi: I created a ticket for it https://at.projects.genivi.org/jira/browse/GDP-249
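[Editor's note] The workaround leon-anavi describes above, building only the SOTA client recipe rather than a full image, would look roughly like the sketch below. Only the bitbake invocation comes from the log; the build directory path is an assumption.

```shell
# Sketch of the targeted build mentioned in the log: build only the
# rvi-sota-client recipe (and its dependencies) instead of a full image.
# BUILDDIR is an assumption; only the bitbake invocation is from the log.
BUILDDIR="${BUILDDIR:-$HOME/gdp/build}"
RECIPE="rvi-sota-client"
BITBAKE_CMD="bitbake $RECIPE"

# In a sourced Yocto build shell this would be run as:
#   cd "$BUILDDIR" && bitbake rvi-sota-client
echo "$BITBAKE_CMD"
```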
10:22 *** DSVenky has quit IRC
10:42 *** gunnarx has joined #automotive
10:42 *** gunnarx has joined #automotive
10:43 <gunnarx> CTtpollard, it looks like the standard GDP pipelines are not converted to single-branch yet
10:44 <gunnarx> pedroalvarez, ^^
10:44 <CTtpollard> not fully no
10:44 <gunnarx> Was I supposed to do it?  :)
10:45 <CTtpollard> I think our point of view was to see how the new pipelines were faring, before touching the 'standard' ones, switching to daily etc
10:45 <gunnarx> OK, but your latest merge (FSA) fails on the PR builds (because the latest PR is a fail-test) but the standard pipelines are not being rebuilt.
10:46 <gunnarx> Right, we spoke of setting them on a timer...
10:46 <gunnarx> I suppose we might as well do that.
10:47 <CTtpollard> I've set up a ticket for the PRs failing due to rust. leon-anavi is trying locally
10:47 <leon-anavi> CTtpollard, rvi sota client builds on my side
10:47 <gunnarx> Yep.  But I saw also the PR test pipelines are rebuilt when you merge new stuff to master
10:48 <gunnarx> The latest PR is my own deliberate FAIL test.
10:48 <gunnarx> There are 4 active GDP targets on the new one right?  No Koelsch/Silk on the new setup yet?
10:48 <leon-anavi> CTtpollard, I shared the log in JIRA
10:48 <CTtpollard> the exception being thrown by rustc is here https://github.com/rust-lang/rust/blob/1.7.0/src/etc/snapshot.py#L148
10:48 <leon-anavi> I built rvi-sota-client for rpi2
10:48 <CTtpollard> ty leon-anavi
10:50 <CTtpollard> gunnarx: we discussed that it might be viable to still build new commits even if the PR passed (i.e. the situation where another merge has happened in the meantime), but those specific pipelines triggering is probably not the right way to handle it
10:50 <CTtpollard> if possible, those should PR trigger only
10:52 <gunnarx> Yeah, we could try avoiding that.  But I think the plugin builds standard master in order to have a reference; at least the very first time you set it up you need to.
10:53 <gunnarx> Not 100% sure how to right now.
10:54 <gunnarx> Anyhow, I can fix the standard pipelines of Minnowboard, QEMU, RPI2.  They are fetching from GitHub but not modified for single-branch.  So in their current state they are unusable.
10:54 <gunnarx> I can pause them in the meantime.
10:57 <CTtpollard> thanks
10:57 <CTtpollard> does go.cd not have some foo to say 'only trigger for PR'
10:57 <CTtpollard> ?
10:58 <gunnarx> I don't have too much time but I want to make a quick fix.
10:58 <gunnarx> I don't know, possibly not.  It's a plugin doing the PR stuff and under development...
10:59 <gunnarx> For normal pipelines you just set it to not trigger on git change, and trigger it manually or by timer.  For the PR plugin IDK
10:59 <gunnarx> Sorry, some more sync up needed.
10:59 <CTtpollard> yup
10:59 <gunnarx> The usage of $MACHINE is basically the choice of bsp?
10:59 <CTtpollard> yes
10:59 <gunnarx> so it's the board configuration...
10:59 <gunnarx> so MACHINE will be set to silk and koelsch eventually?
11:00 <gunnarx> b/c there's some older pipeline using $BOARD for this.
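[Editor's note] For context on the $MACHINE exchange above: in a Yocto build, MACHINE in conf/local.conf selects the BSP/board. A minimal sketch, using a scratch file in place of local.conf; the machine names mirror the targets discussed in the log, and the exact names defined by the BSP layers may differ.

```shell
# Sketch: MACHINE in conf/local.conf picks the BSP/board for the build.
# A temp file stands in for conf/local.conf here; machine names are
# illustrative and may differ from the actual layer definitions.
LOCAL_CONF="$(mktemp)"
cat > "$LOCAL_CONF" <<'EOF'
# One board per build directory, e.g. qemux86-64, raspberrypi2,
# porter, silk, koelsch
MACHINE ?= "raspberrypi2"
EOF
grep '^MACHINE' "$LOCAL_CONF"
```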
11:00 <CTtpollard> trying to replicate the rust failure locally on a minnow env, setting the same threads/parallel as the agent is using to see if that's causing an issue at all
11:00 <CTtpollard> gunnarx: I'd like it for every target, but I think we've only really got bandwidth for 1 board per bsp for now
11:01 <CTtpollard> same for raspi3
11:01 <gunnarx> I'm talking about organization, not b/w
11:01 <gunnarx> As I said, I'm happy to pause the pipelines temporarily
11:01 <gunnarx> oh, developer bandwidth?
11:01 <CTtpollard> then sure, pipeline for every target eventually
11:01 <CTtpollard> no, hardware resources
11:01 <gunnarx> Anyway, I'm just setting up the structure regardless.
11:02 <gunnarx> OK, but for the hardware, we just keep some of them paused then
11:03 <CTtpollard> There's lots of pipedreams (pardon the pun) for what we can achieve; just wanted to create a workable system that was achievable / realistic for the first stage
11:03 <gunnarx> Also, sorry to pile it on, but some of our pipelines use $MACHINE (env vars) and some #{MACHINE} (parameters).  This is because of multiple masters here and I'm the guilty party...   But it's not urgent.  We can align that detail later.
11:03 <gunnarx> :)
11:04 <gunnarx> NP, I'll just update the ones that are obviously wrong at this point (no single-branch support) for starters.
11:04 <CTtpollard> the two new pipeline types we've introduced are highly inefficient, but it's a base to build upon
11:05 <CTtpollard> gunnarx: thanks :)
11:09 <gunnarx> As of now master is broken with the Rust problem right, except for QEMU target?  (If so I won't trigger any of the pipelines right now)
11:11 <CTtpollard> gunnarx: apparently so, as of yet unreplicated outside of the pipeline
11:12 <CTtpollard> if my current local test cannot reproduce the failure, I plan to build it directly on the agent that is failing
11:12 <gunnarx> ah, OK
11:13 <CTtpollard> and handle the gdp10 integration work...
11:17 <gunnarx> Yes, I'm not trying to reprioritize your time - just some sync
11:18 <leon-anavi> CTtpollard, how is your current local test going? ping me when you have some results
11:19 <CTtpollard> leon-anavi: still not complete as of yet
11:20 <leon-anavi> ok
11:20 <leon-anavi> I have to dig into another task. Pls ping me when you have results so we can investigate them together.
11:21 <CTtpollard> cool :)
11:41 <leon-anavi> CTtpollard, is there some kind of OE mirror for GDP?
11:42 <CTtpollard> not as of yet, we've talked about pre-mirroring for the go.cd pipelines, not sure about public usage though
11:43 <CTtpollard> probably a question for the tools team
11:44 <leon-anavi> ok
11:44 <gunnarx> leon-anavi, are you looking for better availability (sites going down), or build efficiency, or ...
11:45 <leon-anavi> gunnarx, just brainstorming what might have caused the issue while the pipelines were fetching rust.
11:45 <gunnarx> ok
11:45 <leon-anavi> Because 30 min ago I managed to build the recipe for rvi sota client and its dependencies without any issues on my side.
11:45 <gunnarx> for which target?
11:45 <CTtpollard> 502/845 here locally
11:46 <leon-anavi> So Tom and I are still wondering what went wrong for the build from the pipelines
11:46 <leon-anavi> gunnarx, I built it on my side for raspberry pi 2
11:46 <gunnarx> ok
11:46 <gunnarx> I thought it looked like it was failing for all but QEMU, but I didn't look very deeply
11:47 <CTtpollard> that would be the initial outcome, although the qemu pipeline was the first to build, so it might just be coincidence
11:48 <leon-anavi> CTtpollard, it is the same source code for all targets so fetching shouldn't be specific to the machine, right?
11:48 <CTtpollard> leon-anavi: yep, and it's not probable that it's just an ARM issue, as minnow x86 also failed
11:50 <leon-anavi> ok
11:51 <CTtpollard> my build VM is on the same physical machine as the go agent, but there might be some network differences, public/private etc
11:53 <pedroalvarez> this is the result of having build systems downloading things from the webs
11:54 <pedroalvarez> someone should tell them off
11:55 <gunnarx> as opposed to downloading it from a "trusted site" :)
11:55 <gunnarx> thing is pedroalvarez, first of all no one ultimately trusts anything other than their own local mirror
11:56 <gunnarx> And the GENIVI git (hosted at LF) was unreliable too...
11:56 <gunnarx> so I don't think there's a really obvious solution, except probably to make it really really simple to keep your own local mirror?
11:58 <gunnarx> But I'm eager to hear what the rust failure turns out to be.  If it's an unreliable upstream site, then sure, having more local mirrors that "we" control can help the situation.  For any production level work, you want even more local mirrors...
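[Editor's note] The "keep your own local mirror" idea gunnarx raises maps to BitBake's PREMIRRORS variable: fetches try the mirror before the upstream site. A hedged sketch with a hypothetical mirror URL, using the underscore-override syntax of Yocto releases from this era (newer releases use the `:prepend` operator):

```shell
# Sketch: prepend a local source mirror so fetches try it before any
# upstream site. The mirror URL is hypothetical; a temp file stands in
# for conf/local.conf.
LOCAL_CONF="$(mktemp)"
cat > "$LOCAL_CONF" <<'EOF'
PREMIRRORS_prepend = "\
    git://.*/.*   http://mirror.example.local/sources/ \n \
    ftp://.*/.*   http://mirror.example.local/sources/ \n \
    http://.*/.*  http://mirror.example.local/sources/ \n \
    https://.*/.* http://mirror.example.local/sources/ \n \
"
EOF
grep -c 'mirror.example.local' "$LOCAL_CONF"
```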
12:02 <leon-anavi> yes, sure, we are all interested to find out why it failed.
12:06 <gunnarx> pedroalvarez, I've been cleaning a little in the Go configs.  Trying to reduce the number of templates.
12:07 <pedroalvarez> cool
12:07 <gunnarx> So no special one for Renesas needed with your _generic one.
12:07 *** praneeth has quit IRC
12:07 *** praneeth has joined #automotive
12:08 <gunnarx> But it has encoded porter now, so silk/koelsch need to be added whenever those are set up again in the new setup
12:08 <gunnarx> I copied the generic one into my "with SDK" variant, so they are the same except for adding the Yocto SDK
12:08 <pedroalvarez> yes, I was aware of that, and was planning to fix it once support for others was added
12:09 <gunnarx> sure, I know you are aware.  Just summarizing
12:09 <gunnarx> We agreed to switch /srv/go/dl to /var/cache/yocto/downloads as standard?
12:09 <gunnarx> Oh, and sstate too
12:10 <gunnarx> I'm also wondering about the threads/parallel make stuff.  Do you want/need this for the CT agent?  The default seems pretty good I think.
12:12 <gunnarx> I've kept a little experiment with local PREMIRRORS, but you prefer DL_DIR as the mechanism for avoiding downloads then?
12:12 <gunnarx> We can go for that
12:12 <CTtpollard> gunnarx: iirc we dropped it down to 4 manually because of a problematic race condition that was happening with 8
12:12 <gunnarx> Ah, OK
12:13 <CTtpollard> can't put my finger on it though
12:13 <gunnarx> Alright, let's keep it across the board then, for now
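[Editor's note] The threads/parallel-make settings under discussion are the BB_NUMBER_THREADS and PARALLEL_MAKE variables; pinning them to 4 as CTtpollard describes would look like this (a temp file stands in for conf/local.conf):

```shell
# Sketch: cap BitBake task parallelism and make jobs at 4, as the log
# says was done to dodge a race condition seen at 8. By default both
# values are derived from the host CPU count.
LOCAL_CONF="$(mktemp)"
cat > "$LOCAL_CONF" <<'EOF'
BB_NUMBER_THREADS ?= "4"
PARALLEL_MAKE ?= "-j 4"
EOF
grep -E 'THREADS|MAKE' "$LOCAL_CONF"
```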
12:13 <pedroalvarez> I only played with DL_DIR; if PREMIRRORS is better we should use that
12:14 <gunnarx> I'm not sure if it makes any difference at all.  I had a vague feeling of it being safer.  The shared dir won't have the ".done" files and such state that might somehow interfere with the state of other builds, I imagined
12:14 <gunnarx> But it's far-fetched, just some gut feeling.    DL_DIR is more efficient as there won't be any local copies made at all.
12:16 <pedroalvarez> right, then green light to move to /var/cache/yocto/downloads
12:16 <gunnarx> I suppose sharing sstate is surely a lot more dangerous in comparison.   I really don't know if it is stable when building multiple different architectures.
12:16 <pedroalvarez> there is only one way to find out
12:18 <gunnarx> so do we go with DL_DIR or PREMIRRORS?
12:18 <gunnarx> Can you manage the additional disk space...  I guess it's 2-3 GB per pipeline or so
12:19 <gunnarx> OK, let's try DL_DIR then
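[Editor's note] The DL_DIR approach settled on above amounts to one line of configuration shared by every pipeline's build (path from the log; a temp file stands in for conf/local.conf):

```shell
# Sketch: point every build at one shared download cache via DL_DIR,
# using the path agreed in the log. With a shared DL_DIR no per-build
# copies of the fetched tarballs are made at all.
LOCAL_CONF="$(mktemp)"
cat > "$LOCAL_CONF" <<'EOF'
DL_DIR = "/var/cache/yocto/downloads"
EOF
grep '^DL_DIR' "$LOCAL_CONF"
```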
12:22 <gunnarx> pedroalvarez, since you green-lit it, I have changed everywhere to /var/cache/yocto/downloads :)  Can you make sure your agent has the path set up and writable by the go user?
12:25 <pedroalvarez> yup, did it just before saying we could change :)
12:27 <gunnarx> you are as reliable as a rock
12:28 <pedroalvarez> ;)
12:28 <pedroalvarez> I haven't done anything with sstate yet
12:30 <gunnarx> So I'm editing a JIRA issue and wondering why the browser is freezing up...  Oh.  Apparently I CTRL-V'd the entire Go configuration XML file into the comment - that's why... :-P
12:53 *** Tarnyko has joined #automotive
13:16 <gunnarx> pedroalvarez, what do you mean you haven't done anything?  The pipeline sets SSTATE it seems, so I assume it means the different pipeline builds are sharing sstate?
13:17 <gunnarx> Speaking of which, could that be causing CTtpollard's current build issues?
13:18 <pedroalvarez> all pipelines are sharing the sstate, located in /srv/go/sstate (IIRC)
13:18 <pedroalvarez> I thought we were trying to align, so I was expecting a move to /var/cache/yocto/sstate
13:18 <pedroalvarez> or similar
13:18 <gunnarx> Oh, now I see what you mean
13:18 <gunnarx> Yes, that would be logical, yes.
13:19 <CTtpollard> maybe they're trying to unpack what the qemu pipeline used and failing in that way
13:19 <gunnarx> Shall I change that too?
13:19 <gunnarx> CTtpollard, yes, the sstate cache seems a suspect for this type of error, right?
13:19 <pedroalvarez> gunnarx: if you are ok with that path, then yes
13:19 <gunnarx> Yes I'm OK with it
13:20 <CTtpollard> gunnarx: it looks like it's trying to fetch a remote snapshot, and a CA cert problem is tripping it
13:20 <gunnarx> OK, changed to /var/cache/yocto/sstate
13:21 <gunnarx> Well, we should trigger a pipeline in the meantime I guess, see if the problem fixes itself :)
13:21 *** ashwasimha_ has quit IRC
13:22 <pedroalvarez> +1
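[Editor's note] The matching sstate change is the same pattern with SSTATE_DIR (path from the log; whether a shared cache is stable across multiple target architectures is exactly the risk gunnarx flags above):

```shell
# Sketch: share the sstate cache across pipelines via SSTATE_DIR, with
# the path from the log. A temp file stands in for conf/local.conf.
LOCAL_CONF="$(mktemp)"
cat > "$LOCAL_CONF" <<'EOF'
SSTATE_DIR = "/var/cache/yocto/sstate"
EOF
grep '^SSTATE_DIR' "$LOCAL_CONF"
```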
13:23 <gunnarx> pedroalvarez, can you look at https://go.genivi.org/go/tab/build/detail/GDP-Yocto-RaspberryPi2/19/build/1/init_and_bitbake
13:24 <pedroalvarez> gunnarx: "fatal: Remote branch raspberrypi2 not found in upstream origin"
13:24 <gunnarx> ah ok
13:24 <gunnarx> my bad then. hang on
13:26 <gunnarx> Sorry, thought it was something with your local agent.  All of them are now set to the master branch
13:28 <CTtpollard> I'm sharing sstate cache locally across targets, no issue as of yet sharing rust
13:29 <gunnarx> CTtpollard, sure, but the agent sstate might be corrupt somehow
13:36 <CTtpollard> gahh, going to have to reboot
13:37 *** CTtpollard has quit IRC
13:38 *** CTtpollard has joined #automotive
13:39 <gunnarx> Welcome back.  At least you reboot in under one minute...
13:39 <CTtpollard> and that's without an ssd!
13:41 <gunnarx> I feel sorry for you now.
13:41 <CTtpollard> 'mmc0: error -110 whilst initialising SD card'
13:41 <CTtpollard> ty monday
13:42 <gunnarx> oh no
13:42 <fredcadete> shared sstate killed your ssd!
13:42 <gunnarx> sd
13:42 <fredcadete> even so
13:43 <fredcadete> the murderer
13:44 <gunnarx> OK, some bugs after I hacked around, was missing some MACHINE environment variables.  But now the build runs.  Lots of setscene going on, which is nice to see
13:47 <gunnarx> Lots of setscene
13:47 <gunnarx> oops
13:47 <gunnarx> repeat
13:47 *** CTtpollard has quit IRC
13:49 *** CTtpollard has joined #automotive
13:52 *** CTtpollard has quit IRC
13:56 *** CTtpollard has joined #automotive
14:05 <pedroalvarez> gunnarx: can we kill ci_test now :D
14:05 <pedroalvarez> ?
14:05 <gunnarx> ?
14:06 <gunnarx> you mean all of the bizarre failures currently happening? :)
14:06 <pedroalvarez> nope, the resource needed in some pipelines
14:06 <pedroalvarez> ct_test, I mean
14:07 <pedroalvarez> yocto_build assumes /var/cache/yocto/* ?
14:09 <CTtpollard> I've asked in #yocto is you can specify specific packages to ignore sstate cache
14:09 <CTtpollard> * if you
14:12 <gunnarx> oh, ct_test!  big difference :)
14:12 <gunnarx> yes, I think you can pretty much nuke it.  I am using yocto_build as the preferred resource for the bigger agents that can handle it
14:17 <fredcadete> incidentally, I was having a look at the Renesas packages that fail when sharing sstate. For vspmif I have found the issue and I'm trying a fix. If we are lucky it's the same issue in the other packages
14:17 <fredcadete> CTtpollard: for which package were you trying to avoid sstate specifically?
14:17 *** toscalix has quit IRC
14:18 <CTtpollard> fredcadete: it's stemming from rust, specifically rustc by the looks of it
14:18 <fredcadete> oops, that one I don't have on my setup
14:20 <CTtpollard> gunnarx: if it fails again, I propose nuking the rust work in sstate
14:20 <CTtpollard> hopefully it's just a corruption; I can't replicate it even on the agent
14:21 <pedroalvarez> CTtpollard: could you replicate it on the agent by reusing the sstate cache?
14:21 <gunnarx> OK. I propose even nuking all of sstate if that's the thing.  Make a new fully clean build.
14:21 <CTtpollard> pedroalvarez: not tried that yet, I'll start one now
14:21 <CTtpollard> that rules out a network issue though
14:22 <gunnarx> I didn't understand a word of the answer you got in #yocto :)
14:23 <fredcadete> bitbake rustc -c cleansstate may be faster to try than nuking the whole sstate
14:23 <fredcadete> and gunnarx, me neither...
14:23 <gunnarx> oh good :)
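[Editor's note] fredcadete's targeted alternative to nuking the whole cache is the per-recipe cleansstate task. A sketch of cleaning the suspect recipes one by one; the recipe names come from the log, and in a sourced build shell each printed command would be run directly:

```shell
# Sketch: invalidate sstate for just the suspect recipes instead of
# deleting the whole shared cache. Recipe names are from the log; the
# commands are only printed here since bitbake needs a build environment.
for recipe in rustc rust-cross-x86_64 rvi-sota-client; do
    echo "bitbake -c cleansstate $recipe"
done
```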
14:23 <leon-anavi> CTtpollard, most probably you have experienced a temporary network issue.
14:37 <CTtpollard> monday has been more testing than usual
14:37 * CTtpollard is glad it's Manchester Beer Week
14:39 *** gan has joined #automotive
14:41 *** gunnarx has quit IRC
14:42 <gan> CTtpollard, we've been changing stuff around in the pipelines a bit, and also having issues with the shared downloads and cache on the other agent.  But this is on the CT agent, I have no real explanation.  Maybe you can investigate https://go.genivi.org/go/tab/build/detail/GDP-Yocto-Minnowboard/30/build/1/init_and_bitbake
14:42 <gan> A little suspicious failure on navigation service when FSA was just merged?
14:43 *** gan is now known as gunnarx
14:49 *** gunnarx has left #automotive
14:54 <leon-anavi> This is the first time I hear about "Manchester Beer Week" but it sounds fantastic :)
14:58 *** toscalix has joined #automotive
14:59 <CTtpollard> gan: would look that way
15:01 <CTtpollard> the error does not look like it would be arch specific though
15:01 *** jlrmagnus has joined #automotive
15:06 *** mvick has joined #automotive
15:14 *** jlrmagnus has quit IRC
15:16 <CTtpollard> ok, finally
15:17 <pedroalvarez> reproduced?
15:17 <CTtpollard> using the same sstate, user, etc on the agent: same failure when trying to compile rvi-sota-client
15:18 <CTtpollard> so I'd say clean it using bitbake cleansstate, or nuke the whole contents
15:19 <CTtpollard> and see if it happens again after it has been populated once for a target
15:21 <CTtpollard> I'll go with the first option and try again
15:21 <pedroalvarez> yup
15:21 <CTtpollard> actually, I'll do it for rust in general
15:28 <leon-anavi> CTtpollard, do you have this error when you build it from scratch?
15:29 <CTtpollard> trying to confirm that now, but last time no
15:30 <CTtpollard> cleansstate on rust & rust-cross-x86_64 both made no difference in the shared sstate
15:31 *** jlrmagnus has joined #automotive
15:33 <pedroalvarez> time to nuke it all?
15:33 <pedroalvarez> or... maybe it's just the website not allowing too many downloads at the same time?
15:33 <CTtpollard> I'm trying one last thing
15:34 <kooltux> dl9pf_, ping
15:34 <CTtpollard> it would be interesting to have a clean sstate, build a non-qemu PR pipeline first, and then see if it happens to the others after
15:38 *** jobol has quit IRC
15:42 *** rajm has quit IRC
15:45 *** jonathanmaw has quit IRC
15:46 <pedroalvarez> CTtpollard: let's do that?
15:46 <CTtpollard> hang fire!
15:46 <CTtpollard> :)
15:47 <pedroalvarez> right, I'll let you do it :)
15:49 <CTtpollard> I probably won't be able to do it until the morning
15:57 *** fredcadete has quit IRC
15:59 *** nisha has quit IRC
16:24 *** ctbruce has quit IRC
16:27 <leon-anavi> CTtpollard, given these circumstances, what might have caused the issue?
16:31 <toscalix> leon-anavi: CTtpollard went home, I think
16:34 <leon-anavi> ok
16:34 <leon-anavi> I will catch him tomorrow :)
16:34 <leon-anavi> sorry that I cannot help much, but I was unable to reproduce the issue
16:35 *** AlisonChaiken has quit IRC
16:36 *** Saint_Isidore has joined #automotive
16:53 *** AlisonChaiken has joined #automotive
16:59 <toscalix> leon-anavi: np. We will put more effort in tomorrow
16:59 <leon-anavi> ok, I will be around on IRC to support you with what I can
17:25 *** leon-anavi has quit IRC
17:53 *** Tarnyko has quit IRC
18:24 *** Tarnyko has joined #automotive
18:47 *** jlrmagnus has quit IRC
18:48 *** jlrmagnus has joined #automotive
19:21 *** toscalix has quit IRC
19:26 *** nisha has joined #automotive
20:03 *** kooltux_ has joined #automotive
20:11 *** kooltux_ has quit IRC
20:26 *** kooltux_ has joined #automotive
20:31 *** kooltux_ has quit IRC
21:13 *** jlrmagnus has quit IRC
21:37 *** jlrmagnus has joined #automotive
22:12 *** jlrmagnus has quit IRC
22:32 *** jlrmagnus has joined #automotive
22:44 *** Tarnyko has quit IRC
22:47 *** Tarnyko has joined #automotive
23:28 *** jlrmagnus has quit IRC
23:30 *** kooltux_ has joined #automotive
23:52 *** kooltux_ has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!