IRC logs for #baserock for Friday, 2015-09-04

01:29 <Walkerdine> I can't seem to get my sata drive to connect to the jetson
01:30 <Walkerdine> It shows up on my computer just fine but it comes up with errors when I hook it up to the jetson
01:39 <Walkerdine> I just have endless problems ha
01:41 <Walkerdine> I'm getting a bunch of COMRESETs when I hook my sata drive up
07:20 *** zoli__ has joined #baserock
07:52 *** fay_ has joined #baserock
08:01 *** toscalix__ has joined #baserock
08:02 *** toscalix__ is now known as toscalix
08:11 <rjek> Walkerdine: I've seen that with faulty SATA leads; have you tried using the same cable you used to connect it to your PC?
08:12 *** jonathanmaw has joined #baserock
08:20 *** rdale has joined #baserock
09:01 *** petefoth_ has joined #baserock
09:03 *** petefoth has quit IRC
09:03 *** petefoth_ is now known as petefoth
09:56 <pedroalvarez> <paulsherwood> infrastructure/extensions/simple-network.configure requires cliapp, so ybd can't deploy it
09:57 <pedroalvarez> paulsherwood: this might be because it's a fork and it's not up to date
09:57 <paulsherwood> pedroalvarez: yup. i was only trying it since definitions seems to have no 'subsystem' examples
09:59 *** petefoth_ has joined #baserock
09:59 <SotK> paulsherwood: definitions/clusters/installer-build-system-x86_64.morph has subsystems
09:59 *** petefoth has quit IRC
10:02 *** petefoth has joined #baserock
10:03 *** petefoth_ has quit IRC
10:04 <paulsherwood> it does? i thought i came up empty on grep
10:04 *** edcragg has joined #baserock
10:05 * paulsherwood wonders what he must've been smoking
10:07 <rjek> Washing powder?
10:07 <Kinnison> toenails
10:11 <pedroalvarez> :)
10:11 <pedroalvarez> I believe clusters/release.morph also has subsystems
10:12 * pedroalvarez remembers we have added xfce to ci.morph
10:12 <tiagogomes_> the openstack clusters also have subsystems
10:15 <pdar> paulsherwood: Heya, was following your "... ybd on aws" instructions at w.b.o/ybd and got stuck. Ever seen this before? http://paste.baserock.org/mevucuyuku
10:18 <richard_maw> ouch, fun! could be one of your dependencies bumped their version requirement
10:18 <richard_maw> at which point you might be able to get around it by being more specific about the version you need
10:18 <richard_maw> (which tbh I'd recommend anyway)
10:19 <paulsherwood> pdar: what is your pip version?
10:19 *** rdale has quit IRC
10:19 <paulsherwood> also, which ami are you using?
10:20 <Kinnison> paulsherwood: we're using the same AMI you said you'd picked
10:20 <Kinnison> paulsherwood: in the same size instance you said you'd used
10:20 <paulsherwood> :)
10:20 *** rdale has joined #baserock
10:20 <pdar> hmm, the pip version is 7.1.2
10:21 <SotK> is `python` python2 or python3 here?
10:21 <richard_maw> pdar: ø_O that means something you depend on is *hard* depending on an older version
10:22 <paulsherwood> pdar: do the pip installs separately, work out which is misbehaving
10:22 <Kinnison> a commandline tool which tracebacks to indicate issues is not a good tool IMO
10:22 * paulsherwood didn't choose python for baserock tooling
10:23 * paulsherwood didn't choose pip as the default installer for python tools :)
10:23 <pdar> hmm, ok, I tried to use pip install to degrade pip to the 6.1.1 version but pip played up with the same error
10:23 * SotK vaguely recalls seeing something like that when he was using pip with python3 in the past
10:23 <richard_maw> paulsherwood: that may not necessarily generate the same result, as it may be attempting to install the whole set based on the sum of the constraints, while each individually would be satisfiable
10:24 <paulsherwood> richard_maw: ack
10:24 <paulsherwood> it would clearly be preferable to have a baserock ami, but i found that too complicated to attempt
10:24 <Kinnison> I'm working on that
10:24 <paulsherwood> :)
10:24 <Kinnison> I *believe* I can get something up
10:24 <pedroalvarez> :D
10:24 <richard_maw> it's become way easier since I last looked at what was required
10:24 <Kinnison> just as soon as pdar can get a build going on this EC2 instance
10:24 <paulsherwood> belief is very important
10:26 * pedroalvarez whispers: "do not build, you only need to deploy nowadays"
10:26 <paulsherwood> ?
10:26 <pedroalvarez> mason + cache.baserock.org :)
10:27 <paulsherwood> ybd's cache is not compatible with that, sadly
10:27 <paulsherwood> and ybd needs atomicity and some other things for caching
10:28 <paulsherwood> on my aws machine...
10:28 <paulsherwood> Python 2.7.9
10:28 <paulsherwood> pip 6.0.8
10:28 <Kinnison> we seem to have pip 7 installed in /usr/local
10:29 <Kinnison> I could probably blat that out
10:29 <paulsherwood> did pdar install that, or was it already there?
10:29 * paulsherwood will re-run and fix the instructions at some point, or pdar could if he would like to
10:30 <Kinnison> pdar: did you upgrade pip?
10:33 <pdar> I did not upgrade pip, just used the `get-pip.py` to get pip
10:33 * paulsherwood notices that in his shell history on that machine he actually did yum install pip, not what is written on the wiki
10:33 <pdar> I think
10:33 <Kinnison> paulsherwood: tsk
10:34 <Kinnison> pdar: blat the pip you installed with that, and use 'yum install pip'
10:34 <Kinnison> pdar: I can do that if you want
10:34 <paulsherwood> Kinnison: we all make mistakes
10:34 <Kinnison> paulsherwood: aye
10:34 <paulsherwood> i shut down a power station once
10:34 <Kinnison> paulsherwood: in a previous job, public mistakes led to the doughnut hat
10:35 * Kinnison dreads to think how hard it'd be to distribute doughnuts to all of #baserock
10:35 <paulsherwood> :)
10:35 <pdar> Kinnison: how should I effectively blat the current pip?
10:36 * richard_maw is a fan of `find /usr/local/foo -delete`
10:36 <Kinnison> umm
10:36 <Kinnison> that might be slightly heavyweight
10:36 <paulsherwood> python -m pip uninstall ?
10:37 <Kinnison> oooh
10:37 * Kinnison manages it
10:37 <Kinnison> with repeated 'python -m pip uninstall pip'
10:38 <Kinnison> oooh yay pip
10:38 <Kinnison> it removed the one in /usr
10:38 <Kinnison> as well as the one in /usr/local
10:39 * richard_maw slow claps
10:39 * Kinnison beats the install over the head with some yum --force
10:39 * rjek mutters about pip
10:39 <Kinnison> pdar: okay, pip 6.1.1 is installed
10:39 <Kinnison> pip is worse than cabal
10:39 <pdar> yay! thanks Kinnison
10:39 <richard_maw> Kinnison: well, cabal is at least functional
10:39 * richard_maw ducks
10:40 <Kinnison> richard_maw: Right, you owe me lunch
10:41 <richard_maw> I was off for a burrito now-ish, you're welcome to join me, or I can settle up later.
10:41 <Kinnison> heh
10:41 <Kinnison> another day
10:55 <pdar> thanks for helping paulsherwood Kinnison richard_maw
10:56 <Kinnison> pdar: you're very welcome
10:57 * paulsherwood apologises for the dodgy instructions
10:57 <Kinnison> paulsherwood: have you corrected them?
10:58 *** zoli__ has quit IRC
10:59 <pdar> Ive updated them now
11:00 <pdar> If I run a build with ybd should it utilise all the resources of the monster machine without any special input?
11:01 <Kinnison> paulsherwood: ^^^
11:01 <radiofree> It should yes
11:02 <radiofree> I think it'll tell you at the start what it's going to pass to -j
11:02 <pdar> ahh yes, so it does. ta radiofree
11:03 * Kinnison watches CPU utilisation jump to nearly 10%
11:03 <paulsherwood> no.
11:03 <Kinnison> and network IO jumps way higher
11:04 <paulsherwood> you need to create ybd.conf in its home directory (or modify config/modify.conf) and add: instances: 5
11:04 <paulsherwood> pdar: kill it
11:04 <paulsherwood> you probably also want to set base: '/src' (assuming you're doing the work on a separate volume)
11:05 <pdar> it has been vanquished
11:05 <pdar> I set the `base: /src` thing
11:05 <Kinnison> we have a /src volume yes
11:05 <paulsherwood> and you'll want to rm -fr /root/.cache/ybd
11:05 <paulsherwood> which is its default base
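
For readers following along: the ybd.conf being described is just a small YAML file. A minimal sketch based only on the two options named above might look like the following; the key names (instances, base) are as quoted in the conversation and should be checked against ybd's own config defaults.

    # Hypothetical ybd.conf sketch, using only the settings quoted above;
    # verify key names against ybd's shipped configuration.
    instances: 5      # run five parallel ybd instances on a large AWS machine
    base: '/src'      # keep work and caches on the separate /src volume
                      # (the default base is /root/.cache/ybd)
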
11:06 <radiofree> paulsherwood: I didn't have to do that on the mustang
11:06 <paulsherwood> radiofree: did you have a /src/ partition?
11:06 <radiofree> Yep
11:06 <radiofree> No I mean set the number of instances
11:07 <paulsherwood> radiofree: no, fair enough. how many cores does that have?
11:07 <radiofree> 8
11:07 <paulsherwood> well, probably no advantage to adding instances
11:07 <pdar> how many instances should i give for, is it 40 cores?
11:07 <paulsherwood> but for the biggest aws machines, 5 instances is best
11:08 <pdar> cool, 5 it is
11:08 <paulsherwood> s/best/fastest to complete a full build of ci.morph/
11:09 <radiofree> ah right, so instances will create 5 instances of ybd
11:09 <radiofree> is it smart enough to split the cores between them?
11:10 <radiofree> e.g -j10 for each or something
11:10 <paulsherwood> yup
11:10 <radiofree> neat
11:11 <paulsherwood> https://github.com/devcurmudgeon/ybd/blob/master/app.py#L126
11:11 <paulsherwood> i think there's room for more optimisation though
11:12 <paulsherwood> anyone looking at the output will cringe to see it start building loads of the same stuff five times in parallel
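
The split radiofree is asking about works roughly as sketched below; this is an illustration of the idea only, not the actual code behind the app.py link above.

    # A rough sketch of the core-splitting idea described above, not the
    # actual ybd code: each parallel instance gets an equal share of the
    # machine's cores as its make -j value.
    import multiprocessing

    def jobs_per_instance(instances, cores=None):
        cores = cores or multiprocessing.cpu_count()
        return max(1, cores // instances)

    # e.g. a 40-core AWS machine configured with 'instances: 5'
    print(jobs_per_instance(5, cores=40))  # -> 8, i.e. each instance builds with -j8
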
11:34 *** zoli__ has joined #baserock
12:39 * richard_maw is ready to test firehose in the chunky AWS instance, as soon as he has details for the test trove
13:04 *** zoli__ has quit IRC
13:08 <paulsherwood> i'm trying to 'improve' ybd, but my 'improvement' is making glibc fail with 'gcc: fatal error: environment variable 'STAGE2_SYSROOT' not defined'
13:08 <paulsherwood> i'm guessing this means it's got the wrong gcc somehow?
13:09 <richard_maw> depends where in the build it is
13:09 <richard_maw> if it's part of the bootstrap then you're failing to set an environment variable
13:09 <richard_maw> otherwise, yes, it's using a bootstrap GCC instead of a proper self-hosted one
13:09 <paulsherwood> i'm building build-system. i believe there's only one [glibc] ?
13:10 <richard_maw> pardon?
13:10 <richard_maw> oh, only building one glibc
13:10 <paulsherwood> yup, as opposed to stage*glibc
13:12 <richard_maw> in which case, yes, it's using the wrong gcc. I'd guess it's picked the wrong gcc out of the cache, or you're installing dependencies you shouldn't over the top of ones you should
13:13 *** zoli__ has joined #baserock
13:19 *** Walkerdine_ has joined #baserock
13:21 <Walkerdine_> rjek: I plugged it into my pc and it worked fine
13:22 <Walkerdine_> Maybe my jetson doesn't like the sata cable? Idk
13:23 <paulsherwood> richard_maw: tvm, i'll probe further
13:24 <rjek> Walkerdine_: Using the same cable?
13:24 <Walkerdine_> yeah
13:29 <Walkerdine_> rjek: Should I try with a different cable on the jetson or because its the same one it should work?
13:30 <rjek> Walkerdine_: What's the precise error you're seeing from the Jetson with the SSD connected?
13:30 <Walkerdine_> Its saying that the link is slow, please wait
13:30 <Walkerdine_> and failed to IDENTIFY'
13:43 <Walkerdine_> rjek: Do you think using a different SSD would help?
13:43 <rjek> Walkerdine_: I can't really suggest anything without knowing what the actual error is
13:44 <rjek> It could be that there is a bug in either/both of the kernel driver or SSD firmware, could be browning out on the supplied power rails, etc
13:46 <Walkerdine_> The only error I know of is that its saying that the link is slow to respond
13:47 <Walkerdine_> ata1: link is slow to respond, please be patient (ready=0)
13:47 <Kinnison> is the drive well powered?
13:47 * rjek is thinking that it may be browning out, then
13:48 <rjek> How is the drive being powered?
13:48 <Walkerdine_> I happened to use the same power source when I plugged it into my computer
13:48 <rjek> (Does the Jetson have a SATA power connector, and does it provide both 5V and 12V rails?)
13:48 * wdutch wonders if SimpleHTTPServer will be okay for the CIAT fileserver
13:48 <rjek> Walkerdine_: Is any of this helpful? https://devtalk.nvidia.com/default/topic/830349/jetson-tk1-and-sata-drive-issue/
13:48 <Walkerdine_> which is the jetson's stat power
13:49 <rjek> Walkerdine_: Do you have another Jetson to try?  That page suggests it may be build variance.
13:49 <rjek> (ie, some Jetsons have marginal SATA)
13:50 <Walkerdine_> I do not but I might have to get one
13:54 <Walkerdine_> I don't see those exact messages but very similar ones. The two things I'm seeing suggested is that its either a kernel issue or the hardware is bad. I'm guessing the hardware is bad
14:03 <richard_maw> Kinnison: I take it that the cu010-trove.codethink.com trove is configured to allow pushes to br6/foo branches, given we're in the br6 groups
14:03 <Kinnison> it might be cu010-trove/br6/
14:03 <Kinnison> given the trove-prefix concept
14:03 <Kinnison> it's only a temporary place while we prototype this stuff
14:04 * richard_maw had conflated the trove-prefix and the project prefix
14:04 <richard_maw> I guess the most reliable way to find out is push something, given I'm not in a group that would allow inspecting the gitano config.
14:05 <Kinnison> well, try creating a project :-)
14:06 * richard_maw thought the way to do that was create the group structure
14:08 <Kinnison> rpeo
14:08 <Kinnison> repo
14:08 <Kinnison> I meant repo
14:08 * Kinnison dies of: inability to type in public
14:10 <richard_maw> Kinnison: I'm more interested in being able to push refs to existing projects, which appears to work.
14:10 <Kinnison> okies
14:42 <richard_maw> Kinnison, wdutch, pdar: $ git ls-remote ssh://git@cu010-trove.codethink.com/baserock/baserock/definitions.git | grep br6
14:42 <richard_maw> 793d48720ed057370a2b939bfdb163133d577c10  refs/heads/cu010-trove/br6/firehose-test-1
14:42 <Kinnison> nice
14:43 <Kinnison> though you don't *need* the br6 in that refname :-)
14:43 * richard_maw shrugs
14:43 <Kinnison> richard_maw: once our ops chappy has opened the firewall, people should be able to use git:// for that too :-)
14:44 <Kinnison> Apparently I'm the first person to ask for an "everything from everywhere" firewall rule :-)
14:47 <richard_maw> I get the impression I was expected to change the morph config to change what baserock:baserock/definitions resolved to, rather than putting ssh://git@cu010-trove.codethink.com/baserock/baserock/definitions.git in the repo field for the firehose config
14:47 <Kinnison> probably
14:47 <Kinnison> set trove-id to cu010-trove
14:47 <Kinnison> and trove-hostname to cu010-trove.codethink.com
14:47 <Kinnison> and that should do most of it
14:48 <richard_maw> eh, it worked after I changed how it determined the path to the definitions repository
14:48 <Kinnison> hehe
15:02 <Walkerdine_> I'm really hoping I can contribute something once I get this all set up
15:02 <Kinnison> Walkerdine_: even if you contribute only some notes on how to get started, it'll be helpful
15:02 <Kinnison> Walkerdine_: every contribution is gratefully received
15:02 <Walkerdine_> Whether it be baserock or genivi
15:03 *** rdale has quit IRC
15:05 <Kinnison> okay, so the rule change didn't make stuff public
15:05 <Kinnison> gimme a sec
15:08 <Kinnison> Bingo http://cu010-trove.codethink.com/cgi-bin/cgit.cgi/cu010-trove/br6/orchestration.git/
15:11 <paulsherwood> wdutch: what fileserver?
15:12 <paulsherwood> if you mean cache server, no it won't imo
15:12 <paulsherwood> (ie serving artifacts from builds, tests etc)
15:13 <wdutch> paulsherwood: yes that's what I meant
15:15 <paulsherwood> wdutch: Kinnison, richard_maw or others may disagree, but i'm pretty sure it needs to deal with atomicity of inbound content submissions, plus scale to potentially lots (100s?) of simultaneous requests for potentially large artifacts in a non-blocking way
15:15 <paulsherwood> my starter-for-ten is based on bottle: https://github.com/devcurmudgeon/ybd/blob/master/kbas.py
15:16 <paulsherwood> it handles atomic, and i believe it can easily be configured to scale (using cherrypy or gevent) but that will require testing
15:17 <paulsherwood> also it already showed up a weakness in ybd's sandbox.install which i'm looking at
15:17 <rjek> This strikes me as something that could be a CGI and just scale easily
15:18 <paulsherwood> rjek: you might be right, i'm in unknown waters here
15:19 <paulsherwood> rjek: however, i have a use-case for ybd where jrandom developer would like to turn his/her machine into a temporary cache server
15:19 <rjek> Nod
15:19 <paulsherwood> and another where a group of ybd users cause their machines to act together sharing caches
15:20 <paulsherwood> so triggering some python seemed easiest to me for my first attempt
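
The atomicity paulsherwood mentions comes down to never exposing a partially uploaded artifact: write to a temporary file, then rename it into place. A minimal bottle sketch of that idea follows; it is not the actual kbas.py, and the route names and cache directory are invented for illustration.

    # Minimal sketch of an atomic artifact cache using bottle; not kbas.py.
    # Routes and the 'artifacts' directory are illustrative only, and real
    # code would also validate 'name' against path traversal.
    import os
    import shutil
    import tempfile
    from bottle import Bottle, request, static_file, run

    CACHE_DIR = 'artifacts'
    app = Bottle()

    @app.post('/upload/<name>')
    def upload(name):
        if not os.path.isdir(CACHE_DIR):
            os.makedirs(CACHE_DIR)
        incoming = request.files.get('file')
        fd, tmp = tempfile.mkstemp(dir=CACHE_DIR)
        with os.fdopen(fd, 'wb') as f:
            shutil.copyfileobj(incoming.file, f)
        # rename is atomic on the same filesystem, so readers never see a partial artifact
        os.rename(tmp, os.path.join(CACHE_DIR, name))
        return 'ok'

    @app.get('/get/<name>')
    def get(name):
        return static_file(name, root=CACHE_DIR)

    if __name__ == '__main__':
        # bottle can be asked to use cherrypy or gevent here for more concurrency
        run(app, host='0.0.0.0', port=8000)
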
15:20 * paulsherwood wonders how pdar's build got on
15:22 <richard_maw> paulsherwood: have it announce its cache server over mdns!
15:22 * richard_maw ducks
15:23 <Kinnison> richard_maw: you can duck, but you can't hide, your avahi is advertising your presence
15:23 <Walkerdine_> I'm happy I work right next to a microcenter so I can get another jetson without hassle
15:23 <richard_maw> hahahaha
15:30 <paulsherwood> richard_maw: what is mdns?
15:32 <richard_maw> multicast dns, name resolving without a central server
15:33 <richard_maw> you can also advertise services through it
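
Purely to illustrate what is being (half-jokingly) proposed here: advertising a service over mDNS can be done with the python-zeroconf library, roughly as below. The service type, name and address are invented, and none of the tooling discussed in this log actually does this.

    # Hypothetical sketch only: advertise a local artifact cache over mDNS
    # with the python-zeroconf library. Service type, name and address are
    # made up; no Baserock tool does this.
    import socket
    import time
    from zeroconf import ServiceInfo, Zeroconf

    info = ServiceInfo(
        '_kbas._tcp.local.',
        'my-build-box._kbas._tcp.local.',
        addresses=[socket.inet_aton('192.168.1.10')],  # this machine's LAN address
        port=8000,
    )

    zc = Zeroconf()
    zc.register_service(info)   # peers on the same LAN can now discover the cache
    try:
        time.sleep(60)          # keep advertising for a minute
    finally:
        zc.unregister_service(info)
        zc.close()
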
15:36 <paulsherwood> richard_maw: is this a realistic possibility, or a friday wind-up? :)
15:36 * paulsherwood genuinely doesn't kow
15:36 <paulsherwood> know
15:37 <richard_maw> mostly a wind-up, hence ducking after suggesting it
15:37 <richard_maw> it's used a bit in nodejs communities
15:37 <paulsherwood> richard_maw: 'mostly' suggests there might be something in this idea?
15:38 <paulsherwood> it sounds a bit like the spartacus protocol idea, for real?
15:39 * paulsherwood notes that the 'herd of ybd' idea was ridiculous... until it worked :)
15:39 <Kinnison> paulsherwood: long ago I mooted using mdns or similar to advertise for distributed build stuff
15:39 <paulsherwood> Kinnison: ok. what happened to the mooting?
15:39 <Kinnison> paulsherwood: it's still ridiculous, it's just that daft things sometimes have useful behaviours in the absence of better things
15:39 <Kinnison> paulsherwood: time
15:40 <Kinnison> paulsherwood: I was in Korea at the time, if that helps you date it
15:40 <paulsherwood> ok, shall we re-moot it?
15:41 <Kinnison> Not right now, no
15:42 <richard_maw> you pretty much need it to be a trusted network to safely use it, which limits its usefulness
15:42 <Kinnison> indeed
15:44 <paulsherwood> ok
15:44 <richard_maw> if you had a bunch of build boxes you could pre-configure to advertise on a physical ethernet port then you could make them plug and play with each other, but you wouldn't want to advertise even on your home or office network really
15:46 <paulsherwood> k
15:46 <richard_maw> which is one of the reasons why Windows 10's offering of software updates via a similar mechanism is scary
15:47 <richard_maw> though if the updates are appropriately signed, it's less of a risk
15:48 <paulsherwood> which brings us (potentially) to signing artifacts etc
15:48 <richard_maw> potentially, though that would only buy you sharing of pre-signed artifacts, you'd have to securely build your web of trust between your builders if you wanted to do that
15:49 <richard_maw> which just moves the authentication problem out of band
15:49 <paulsherwood> ack
15:50 <Walkerdine_> speaking of artifacts once I finally get my first build working can I just copy them to use for another jetson?
15:50 <paulsherwood> yup
15:50 <Walkerdine_> Awesome
15:52 <pdar> paulsherwood: heya, the build worked fine in the end, built a build-system in 1hr40mins. speedy!
15:52 <Kinnison> that's surprisingly long compared with what paul suggested it might take
15:52 <paulsherwood> which system?
15:52 <pdar> build one i think
15:52 <paulsherwood> oh, build-system. yup, that seems slow
15:52 <pdar> ill check
15:53 <paulsherwood> pdar: can you confirm it's a mx.10xlarge machine?
15:54 <paulsherwood> oh, also that includes probably up to 30 mins loading gits
15:55 <pdar> yep, twas the build system, it did spend a bunch of time getting gits
15:57 <Kinnison> aah yeah, getting the gits will slow it down
15:59 <paulsherwood> i did think about parallelising that, too, but decided it's a once-off load for most use-cases
16:25 *** CTtpollard has quit IRC
16:25 *** jonathanmaw has quit IRC
16:43 *** franred has quit IRC
16:44 *** zoli__ has quit IRC
16:44 *** zoli__ has joined #baserock
16:48 *** Walkerdine_ has quit IRC
16:58 <jjardon> SotK: hi, can you abandon https://gerrit.baserock.org/#/c/251/ and https://gerrit.baserock.org/#/c/252/ now that xfce systems got fixed? :)
17:05 *** toscalix has quit IRC
17:25 *** mdunford has quit IRC
17:28 <paulsherwood> jjardon: for 'fixed' do you mean they work now?
17:36 <jjardon> paulsherwood: in a vm yes, in real hardware not yet because there is a problem with the logind configuration that doesn't allow the Intel driver access the hardware. Didn't have time to look deeper yet
17:36 <paulsherwood> well, that's still great news :)
17:38 <pedroalvarez> jjardon: :)
17:39 <pedroalvarez> oh, you have improved brpaste! :D
17:42 <jjardon> pedroalvarez: only a little; next step python3 by default in baserock systems ;)
17:42 <jjardon> (Will send that as a RFC when is ready)
17:45 *** edcragg has quit IRC
18:37 *** Walkerdine_ has joined #baserock
20:16 *** bfletcher has quit IRC
20:16 *** zoli__ has quit IRC
20:51 *** Walkerdine_ has quit IRC
21:03 *** bfletcher has joined #baserock
23:09 *** Walkerdine_ has joined #baserock
23:35 *** Walkerdine_ has quit IRC
23:49 *** Walkerdine_ has joined #baserock

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!