*** zoli__ has joined #baserock | 06:54 | |
*** rdale has joined #baserock | 07:00 | |
*** mdunford has joined #baserock | 08:03 | |
paulsherwood | Walkerdine: sorry, i was not... did you get it working? | 08:15 |
paulsherwood | cycle.sh should work on jetson | 08:15 |
radiofree | Walkerdine: you can use system-version-manager to set the system back to factory | 08:16 |
radiofree | system-version-manager set-default factory | 08:16 |
radiofree | then reboot, or select "factory" from the boot menu (via serial) | 08:16 |
*** petefoth_ has joined #baserock | 08:17 | |
*** petefoth has quit IRC | 08:17 | |
*** petefoth_ is now known as petefoth | 08:17 | |
radiofree | regarding system-version-manager, which version do i have to use for "--json" to work? | 08:18 |
* paulsherwood has no idea, sorry | 08:19 | |
paulsherwood | some tidyup is required here, i think | 08:19 |
radiofree | http://git.baserock.org/cgi-bin/cgit.cgi/baserock/baserock/definitions.git/tree/extensions/ssh-rsync.check#n70 seems to require it | 08:19 |
Kinnison | radiofree: it was something ripsum was working on | 08:19 |
Kinnison | radiofree: it's possible the corresponding change is still in gerrit | 08:20 |
radiofree | ah, yeah, tbdiff master didn't work | 08:20 |
*** gary_perkins has quit IRC | 08:34 | |
*** gary_perkins has joined #baserock | 08:39 | |
*** mdunford has quit IRC | 08:45 | |
*** jonathanmaw_ has joined #baserock | 08:53 | |
*** mdunford has joined #baserock | 08:59 | |
*** flatmush has quit IRC | 09:04 | |
*** flatmush has joined #baserock | 09:17 | |
*** mdunford has quit IRC | 09:33 | |
*** mdunford has joined #baserock | 09:45 | |
*** toscalix has joined #baserock | 10:32 | |
*** mdunford has quit IRC | 10:40 | |
*** fay_ has quit IRC | 10:42 | |
*** mdunford has joined #baserock | 10:54 | |
*** fay_ has joined #baserock | 10:57 | |
paulsherwood | i've started writing some stuff about ybd on the wiki... http://wiki.baserock.org/ybd | 10:59 |
Kinnison | handy | 10:59 |
paulsherwood | scaleway is slow, but interesting as an elastic cloud for arm builds :) | 11:00 |
Kinnison | what is scaleway? | 11:00 |
paulsherwood | there was a strange compile error using their debian + ubuntu images though | 11:00 |
paulsherwood | hence fedora is the recommended choice there | 11:00 |
Kinnison | Welcome to the combinatorial explosion | 11:01 |
paulsherwood | https://www.scaleway.com | 11:01 |
petefoth | Kinnison: according to their web site 'Scaleway is the first IaaS provider worldwide to offer an ARM based cloud' | 11:01 |
paulsherwood | a load of baserock slabs in the cloud, basically :) | 11:01 |
paulsherwood | well, same cpu at least | 11:01 |
paulsherwood | their own design | 11:02 |
Kinnison | petefoth: well that's ultra-informative¡ | 11:02 |
paulsherwood | you get a baremetal 4-core 2GB build machine | 11:02 |
Kinnison | yeah that's not going to be the quickest of build farms | 11:03 |
Kinnison | but handy | 11:03 |
paulsherwood | easy to use, with an api | 11:03 |
radiofree | the mustang on my desk could be used to provide arm artifacts? | 11:03 |
paulsherwood | Kinnison: i've not had chance to try splitting workloads across multiple machines there yet, but i expect it to be 'interesting' | 11:04 |
radiofree | it builds a devel image in 4 1/2 hours compared to our current fastest option, the jetson, which takes 7 hours | 11:04 |
radiofree | (devel-system-armv7lhf-jetson) | 11:04 |
rjek | ISTR that the storage on Scaleway is Network Block Device. | 11:04 |
rjek | So it's not going to be fast. | 11:04 |
* Kinnison lunches | 11:04 | |
paulsherwood | and i keep thinking about previous discussions with radiofree and others about distcc | 11:04 |
paulsherwood | rjek: correct. but SSD | 11:04 |
rjek | paulsherwood: The SSD doesn't really matter when you involve the latency of TCP | 11:04 |
rjek | SSDs are fast because they don't have a seek time. | 11:05 |
radiofree | could we use a moonshot node for building armv7 images? | 11:06 |
radiofree | the steps to do it would be exactly the same as the mustang | 11:06 |
paulsherwood | https://raw.githubusercontent.com/devcurmudgeon/build-logs/master/scaleway.genivi-baseline.txt | 11:10 |
paulsherwood | radiofree: yes, we could | 11:10 |
paulsherwood | i haven't got round to timing anything on moonshot | 11:11 |
paulsherwood | radiofree: thanks for your log, i'll add it to the collection :) | 11:11 |
radiofree | i have one from a jetson if you want that | 11:13 |
paulsherwood | yes please. i couldn't connect to jetson last time i tried | 11:13 |
paulsherwood | s/jetson/my jetson/ | 11:13 |
radiofree | biff | 11:19 |
paulsherwood | tvm | 11:20 |
paulsherwood | i've added them to https://github.com/devcurmudgeon/build-logs | 11:37 |
* paulsherwood wants to do some comparative graphs | 11:40 | |
*** fay_ has quit IRC | 12:24 | |
*** fay_ has joined #baserock | 13:38 | |
paulsherwood | Kinnison: not sure if you noticed the ybd artifact publication discussion | 13:50 |
Kinnison | paulsherwood: I noticed you mention you'd written a basic artifact service | 13:50 |
Kinnison | paulsherwood: but that you'd not tested any kind of multithreaded behaviour | 13:50 |
paulsherwood | it's an interesting possibility that any ybd machine could permanently or temporarily become an artifact server | 13:50 |
paulsherwood | Kinnison: i did test multithreaded, and proved it doesn't fully work yet :) | 13:51 |
Kinnison | heh | 13:51 |
Kinnison | I think bottle has a simple multithreaded server support | 13:51 |
Kinnison | or possibly multiprocess | 13:51 |
paulsherwood | but my point is that there would be advantages to this concept... eg radiofree could do a build, and then offer artifacts from his desktop to be collected by interested server(s) | 13:52 |
Kinnison | Within a security domain, sure | 13:52 |
Kinnison | beyond that, I have reservations about safety | 13:52 |
paulsherwood | Kinnison: bottle is multithreaded but blocking | 13:52 |
Kinnison | disk IO is likely blocking | 13:52 |
paulsherwood | it can be run with gevent or cherrypy to be non-blocking (config flag) | 13:53 |
Kinnison | multiprocess will help with that | 13:53 |
paulsherwood | Kinnison: my current design is that the ybd server (kbas) needs to be explicitly run. it has a default password. it won't accept uploads unless the user has specified a different password from the default in its config | 13:54 |
paulsherwood | is that still too unsafe? | 13:54 |
Kinnison | plaintext passwords don't interest me | 13:54 |
paulsherwood | ack. i need to sort https | 13:55 |
paulsherwood | or we could require keys i guess? | 13:55 |
Kinnison | My gut feeling is either HTTPS with certificates, or ssh of some kind | 13:56 |
*** zoli__ has quit IRC | 14:00 | |
paulsherwood | Kinnison: i want something as trivial as possible, for jrandom ybd instance if at all possible | 14:00 |
paulsherwood | s/trivial/trivial to setup/ | 14:00 |
Kinnison | hmm | 14:01 |
Kinnison | https and shared secrets is likely easiest then | 14:01 |
paulsherwood | ack | 14:01 |
* Kinnison stares somewhat blankly at an AWS console | 14:01 | |
Kinnison | wow they have a lot of features | 14:01 |
paulsherwood | click on EBS topleft | 14:02 |
Kinnison | yeah | 14:03 |
Kinnison | which AMI did you use for your ybd testing? I'd like to replicate that as closely as possible | 14:03 |
paulsherwood | sorry EC2 | 14:03 |
Kinnison | "Amazon Linux AMI 2015.03.1 (PV) " ? | 14:04 |
Kinnison | or "Amazon Linux AMI 2015.03.1 (HVM), SSD Volume Type" ? | 14:04 |
paulsherwood | m4.10xlarge with the first option Amazon AMI | 14:04 |
Kinnison | nod. | 14:04 |
Kinnison | that's the HVM one | 14:05 |
paulsherwood | i can find the history of my commands if you like | 14:05 |
paulsherwood | Amazon Linux AMI 2015.03.0 x86_64 HVM GP2 | 14:05 |
Kinnison | did you slap a big EBS on yours for building? | 14:06 |
Kinnison | or just increase the root size? | 14:06 |
Kinnison | (it appears to be 8G by default on the m4.10xlarge) | 14:06 |
paulsherwood | i added a separate volume 100GB | 14:07 |
Kinnison | nod. | 14:07 |
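The separate 100GB build volume mentioned above could be set up along these lines. This is a hypothetical sketch: the device name (/dev/xvdb) and the /src mount point are assumptions, not taken from the log; check `lsblk` after attaching the volume in the EC2 console.

```shell
# Hypothetical sketch: format and mount a freshly attached EBS volume for
# building. Wrapped in a function so nothing runs until you call it.
setup_build_volume() {
    local dev="${1:-/dev/xvdb}"   # device name is an assumption; verify with lsblk
    mkfs.ext4 -F "$dev"           # format the new EBS volume
    mkdir -p /src
    mount "$dev" /src             # mount it as the build area
    echo "$dev /src ext4 defaults 0 2" >> /etc/fstab   # persist across reboots
}
```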
* Kinnison now just needs to wait for his account verification to complete before he can launch | 14:10 | |
paulsherwood | Kinnison: lots of noise in this, but it's everything i've ever done on that machine http://paste.baserock.org/imumehugoz | 14:10 |
paulsherwood | (after sudo su) | 14:11 |
Kinnison | handy | 14:11 |
toscalix | Kinnison, Paul, what code should I put to the project, CT201? | 14:20 |
toscalix | paulsherwood, sorry | 14:21 |
* Kinnison assumes toscalix is aware he's leaking internals? | 14:21 | |
wdutch | for CIAT, will definitions be in trove? I'm not sure what trove is and isn't | 14:21 |
Kinnison | wdutch: yes, it will | 14:21 |
wdutch | Kinnison: ta | 14:21 |
Kinnison | wdutch: I currently have my ops team sorting out a trove for us to play with, so we don't have to disturb git.baserock.org for our experimentation | 14:21 |
wdutch | toscalix: this is the diagram of orchestration, http://i.imgur.com/uK1kRtS.png Kinnison paulsherwood and pdar might want to sanity check it | 14:36 |
toscalix | wdutch, thanks. I will add it to the wiki. | 14:38 |
Kinnison | wdutch: poll for changes //-> triggered on changes | 14:38 |
Kinnison | wdutch: we can make trove ping the orchestration framework | 14:38 |
Kinnison | But that's essentially reasonable | 14:38 |
wdutch | I don't understand what you mean by //-> | 14:38 |
Kinnison | wdutch: orchestration can both poll and be triggered wrt. git changes | 14:39 |
wdutch | the buildbot way of doing things afaict is for buildbot to do what it calls polling and refers to as 'magically knowing' when the repo changes | 14:40 |
Kinnison | hmm | 14:40 |
Kinnison | so long as it supports being externally triggered | 14:40 |
Kinnison | because having it poll what will eventually become 1000s of repos would be a bad thing and might influence us away from buildbot | 14:40 |
wdutch | I was under the impression firehose would deal with the 1000s of repos and then update definitions, orchestration would be triggered by this change in definitions | 14:41 |
Kinnison | Something needs to trigger firehose | 14:41 |
Kinnison | I'm kinda hoping we can make that all the same orchestration framework | 14:42 |
Kinnison | to make life easier | 14:42 |
* wdutch will have to rethink then | 14:43 | |
wdutch | I thought firehose stood alone in that respect | 14:43 |
Kinnison | nope, it needs (currently) to be caused to run | 14:43 |
Kinnison | either on a timer, or by triggers | 14:43 |
wdutch | okay | 14:44 |
pdar | Should the orchestration set up the aws machine for building things? In the diagram that would mean the builder was the aws machine? | 14:49 |
Kinnison | Don't think about "the aws machine" | 14:50 |
Kinnison | remember we're aiming for elastic behaviour | 14:50 |
Kinnison | so there'll be "some number of machines" | 14:50 |
wdutch | I suppose the builders and testers will need to change size elastically, trove, the fileserver and orchestration not so much | 14:53 |
pdar | So, to me, will builder just be some scripts that do stuff somewhere then? | 14:57 |
wdutch | that describes the whole thing :P | 14:58 |
*** Walkerdine_ has joined #baserock | 15:06 | |
Walkerdine_ | Apparently I'm in the wrong timezone for this irc | 15:10 |
Walkerdine_ | channel | 15:10 |
wdutch | I didn't know that was a thing | 15:11 |
Walkerdine_ | Just to be able to reach people when they are around | 15:12 |
wdutch | ah | 15:13 |
Walkerdine_ | Do I need to do anything special to use a sata disk with my jetson | 15:14 |
Walkerdine_ | I flashed the baseline image onto my jetson and I have a sata drive hooked up, but I want to work off of the SSD because it's faster | 15:15 |
radiofree | i use the ssd as my / | 15:16 |
radiofree | you'll have to manually partition the ssd for now, but when you deploy change "ROOT_DEVICE" to /dev/sdaX instead of /dev/mmcblk0p2 | 15:17 |
radiofree | BOOT_DEVICE should probably always be /dev/mmcblk0p1 | 15:18 |
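The deploy step with those devices might look like the sketch below. The cluster file path and deployment name ("jetson-upgrade") are placeholders, and the KEY=VALUE form assumes morph's usual per-deployment variable overrides rather than editing the cluster morphology by hand.

```shell
# Hedged sketch with placeholder names; adjust to your own cluster morphology.
# Wrapped in a function so it is not run directly.
deploy_upgrade_to_ssd() {
    morph deploy --upgrade clusters/jetson-upgrade.morph \
        jetson-upgrade.ROOT_DEVICE=/dev/sda1 \
        jetson-upgrade.BOOT_DEVICE=/dev/mmcblk0p1 \
        jetson-upgrade.VERSION_LABEL=ssd-root
}
```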
paulsherwood | Walkerdine_: most folks here are on eurotime, most of the time | 15:18 |
paulsherwood | but there are exceptions | 15:18 |
paulsherwood | how are you getting on, Walkerdine_ ? | 15:18 |
Walkerdine_ | Yeah I'm on Eastern time | 15:21 |
Walkerdine_ | I'm honestly not sure how to go about doing that | 15:22 |
Walkerdine_ | paulsherwood: I'm sorry I'm not sure if I understand your question | 15:24 |
paulsherwood | Walkerdine_: i'm mainly worried that this is turning into a 'slowstart' for you rather than a quick start :) | 15:25 |
Walkerdine_ | it's mostly my fault because I'm rather new to linux | 15:25 |
paulsherwood | Walkerdine_: well, hopefully we can help... we were all new at some point :) | 15:26 |
paulsherwood | radiofree: do we have any simpler instructions for configuring storage for jetsons? | 15:27 |
Walkerdine_ | I'm sure once I get the change build repeat cycle going I'll be able to do everything on my own | 15:27 |
paulsherwood | i'm sure of that too :-) | 15:27 |
radiofree | paulsherwood: i do not | 15:28 |
radiofree | but that's the type of stuff you'd do outside of baserock anyway | 15:28 |
paulsherwood | ack | 15:28 |
paulsherwood | Walkerdine_: how big is your ssd? | 15:29 |
Walkerdine_ | 120GB | 15:29 |
*** fay_ has quit IRC | 15:51 | |
Walkerdine_ | Is manually partitioning the SSD sort of the same process of how the eMMC was partitioned during the flash? | 16:03 |
*** fay_ has joined #baserock | 16:11 | |
*** jonathanmaw_ has quit IRC | 16:33 | |
*** petefoth has quit IRC | 16:38 | |
*** pdar has quit IRC | 16:59 | |
*** bwh has quit IRC | 17:00 | |
*** fay_ has quit IRC | 17:00 | |
*** SotK has quit IRC | 17:01 | |
*** kejiahu has quit IRC | 17:01 | |
*** perryl has quit IRC | 17:01 | |
*** petefotheringham has quit IRC | 17:01 | |
*** benbrown_ has quit IRC | 17:01 | |
*** jmacs has quit IRC | 17:01 | |
*** paulsherwood has quit IRC | 17:01 | |
*** zoli__ has joined #baserock | 17:01 | |
*** Zara has quit IRC | 17:01 | |
*** sebh has quit IRC | 17:01 | |
*** bwh has joined #baserock | 17:03 | |
*** petefotheringham has joined #baserock | 17:03 | |
*** sebh has joined #baserock | 17:03 | |
*** gary_perkins has quit IRC | 17:03 | |
*** gary_perkins has joined #baserock | 17:04 | |
*** zoli__ has quit IRC | 17:05 | |
*** zoli__ has joined #baserock | 17:09 | |
*** flatmush has quit IRC | 17:10 | |
*** jmacs has joined #baserock | 17:13 | |
*** petefoth has joined #baserock | 17:50 | |
*** mdunford has quit IRC | 17:51 | |
*** toscalix has quit IRC | 18:08 | |
*** lachlan75 has joined #baserock | 18:14 | |
*** lachlan75 has quit IRC | 18:33 | |
*** SotK has joined #baserock | 19:10 | |
*** Walkerdine_ has quit IRC | 19:41 | |
*** paulsherwood has joined #baserock | 19:45 | |
paulsherwood | Walkerdine: sorry i missed your question, i got dumped off irc | 19:54 |
paulsherwood | yes, i believe it's the same process | 19:54 |
paulsherwood | basically you're aiming to treat the SSD as your ROOT_DEVICE, while booting from the kernel you've flashed to the eMMC | 19:57 |
paulsherwood | http://wiki.baserock.org/guides/baserock-jetson/ may or may not be clear enough, it's been a long time since i followed those instructions | 19:59 |
*** zoli__ has quit IRC | 21:23 | |
Walkerdine | paulsherwood: So when I deploy an upgrade using the cycle.sh does it boot temp from the SSD or the eMMC? | 23:11 |
Walkerdine | I'm probably too late for anyone to respond | 23:14 |
Walkerdine | darn | 23:14 |
radiofree | Walkerdine: u-boot will look for things in the emmc | 23:15 |
radiofree | so you *always* want to set BOOT_DEVICE to /dev/mmcblk0p1 | 23:16 |
radiofree | however, ROOT_DEVICE in that cluster can be anything | 23:16 |
radiofree | so if you've partitioned your SSD to have... let's say a 50G sda1 partition for / and RESTOFSPACE for /src, you set ROOT_DEVICE to /dev/sda1 | 23:16 |
radiofree | the flashing script you used to set up the baserock instance will create /dev/mmcblk0p1, so you never have to worry about that once it's flashed | 23:17 |
Walkerdine | Then I create my workspace in /src and | 23:17 |
radiofree | it's usually better to create /src on a different partition, since / is supposed to be a btrfs one | 23:18 |
radiofree | and /src can be used between upgrades | 23:18 |
radiofree | the layout i use is a 50G / btrfs partition on sda1 | 23:18 |
radiofree | and RESTOFSPACE sda2 for /src | 23:18 |
radiofree | but 50G is too much, i use 50G because i do a *lot* of upgrades/deployments | 23:19 |
radiofree | this could probably do with a wiki article actually..... if i have some free time tomorrow i'll write it up | 23:19 |
Walkerdine | I will probably be doing a lot of upgrades | 23:20 |
radiofree | how big is your ssd? | 23:20 |
Walkerdine | 120GB | 23:20 |
radiofree | ok, i'd say 20G / and 100G /src | 23:21 |
Walkerdine | I've got it up and running now how should I partition it out | 23:21 |
radiofree | make sure /src is an ext4 partition though | 23:21 |
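The 20G btrfs / plus remaining-space ext4 /src layout suggested above could be sketched roughly as follows. This is destructive and the exact partitioning commands are an assumption (the log never shows them); double-check the device path before running anything like this for real.

```shell
# Hedged sketch of the suggested SSD layout: 20G btrfs root on partition 1,
# the rest as ext4 for /src on partition 2. Wrapped in a function so it only
# runs if explicitly called with a device.
partition_jetson_ssd() {
    local disk="${1:?usage: partition_jetson_ssd /dev/sdX}"
    parted -s "$disk" mklabel gpt \
        mkpart primary btrfs 1MiB 20GiB \
        mkpart primary ext4 20GiB 100%
    mkfs.btrfs -f "${disk}1"    # becomes ROOT_DEVICE in the cluster
    mkfs.ext4 -F "${disk}2"     # mounted at /src, shared between upgrades
}
```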
radiofree | actually now that i think about it i don't think it's that easy with this tooling :\ I only know how to do this because i wrote the jetson flashing script | 23:21 |
radiofree | let me take a look tomorrow, I'll see if i can write an "experts" guide that explains what to do | 23:22 |
Walkerdine | Can I build just off of the eMMC | 23:22 |
radiofree | one of the things the flashing script does is to setup the emmc (/dev/mmcblk0*) in the correct way | 23:22 |
radiofree | Walkerdine: yes, if you used the flashing script that will be fine | 23:23 |
radiofree | just set BOOT_DEVICE to /dev/mmcblk0p1 and ROOT_DEVICE to /dev/mmcblk0p2 | 23:23 |
Walkerdine | How long does it usually take | 23:23 |
radiofree | sadly, on a jetson these days it takes about 7 hours to build the gdp from scratch | 23:23 |
radiofree | we used to provide ARM caches of these builds, but that got lost | 23:24 |
radiofree | i am actively working on restoring some infrastructure to provide that though | 23:24 |
radiofree | in an ideal world it would take you ~10 minutes to deploy a new devel image or gdp image | 23:24 |
radiofree | although if you're trying to deploy a GDP image to your jetson, then it should be pretty quick | 23:25 |
radiofree | since i uploaded the cache for that to the cache server | 23:25 |
Walkerdine | 7 hours! | 23:26 |
Walkerdine | Oh no | 23:26 |
radiofree | are you attempting to build the gdp image? | 23:26 |
Walkerdine | Yeah | 23:26 |
radiofree | ok hold on | 23:27 |
radiofree | it should be a lot quicker than 7 hours since i uploaded the cache for that | 23:27 |
radiofree | ~10 minutes | 23:27 |
radiofree | (to 'build' it) | 23:27 |
radiofree | so add this line to /etc/morph.conf | 23:28 |
radiofree | artifact-cache-server = http://cache.baserock.org:8080/ | 23:28 |
radiofree | then follow the "Build it yourself" instructions on http://wiki.projects.genivi.org/index.php/Hardware_Setup_and_Software_Installation/Jetson | 23:28 |
radiofree | if the cache hasn't been deleted it should download the prebuilt cache and it'll be quick | 23:29 |
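Adding the cache line could be done like this. The sketch operates on a scratch copy for illustration; on the Jetson itself the file is /etc/morph.conf, as stated above.

```shell
# Append the artifact cache server to a morph config, idempotently.
# Using a scratch copy here; substitute /etc/morph.conf on the device.
conf=./morph.conf.example
touch "$conf"
grep -q '^artifact-cache-server' "$conf" || \
    printf 'artifact-cache-server = http://cache.baserock.org:8080/\n' >> "$conf"
cat "$conf"
```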
Walkerdine | I gotta make sure to use the cycle.sh when I do build it | 23:29 |
radiofree | sure, cycle.sh is just a wrapper around morph deploy though | 23:29 |
Walkerdine | But doesn't it make it so I can boot back into the baseline? | 23:30 |
radiofree | do you have a serial connection to the board? | 23:30 |
radiofree | if you have that you'll get a boot menu | 23:30 |
radiofree | so you can always select previous deployments when you reboot | 23:30 |
radiofree | however, as long as you can login and get a console, you can use the `system-version-manager` command to set the default | 23:31 |
radiofree | system-version-manager list | 23:31 |
radiofree | usually for the devel system you'd do | 23:31 |
Walkerdine | What about when you just flash the gdp on it | 23:31 |
radiofree | system-version-manager set-default factory | 23:31 |
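Pulled together, the rollback procedure described above is just these commands. They only make sense on the Baserock system itself, so they are wrapped in a function here rather than run directly.

```shell
# Sketch of rolling back to the factory system, per the steps above.
rollback_to_factory() {
    system-version-manager list                  # show available system versions
    system-version-manager set-default factory   # boot "factory" next time
    reboot
}
```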
radiofree | then there is no devel image, i can build a GDP + devel image though | 23:31 |
Walkerdine | Ah I see | 23:31 |
Walkerdine | Okay that makes sense | 23:31 |
radiofree | GDP image is just... well the GDP | 23:31 |
radiofree | baserock is great, but we certainly don't explain it well :) | 23:32 |
radiofree | i'll try and rectify that with the GDP at some point in the next two weeks (currently busy on other things), certainly a GDP + devel image would make sense | 23:33 |
Walkerdine | I really think it's just because I'm still new to this whole thing | 23:37 |
radiofree | i don't know, i personally think it should be easier, I think it would be massively beneficial (at least for me) if you would post something to the mailing list (http://listmaster.pepperfish.net/cgi-bin/mailman/listinfo/baserock-dev-baserock.org) about your experiences with baserock | 23:41 |
radiofree | and your expectations as well | 23:41 |
Walkerdine | Yeah i could do that | 23:46 |
Walkerdine | Maybe once I get this up and going | 23:46 |
radiofree | it would be much appreciated :) | 23:47 |
radiofree | but yes, timezones dictate i have to head off now, hopefully the cache is still there | 23:47 |
radiofree | if not, it's a bug and i'll try and fix it tomorrow | 23:47 |
Walkerdine | Darn well thanks for the help | 23:50 |
Walkerdine | hopefully I can get this going today | 23:50 |
radiofree | yeah, if it's taken more than a day then there's clearly something wrong with the workflow/instructions, so it would be nice if you could share/rant on the mailing list about that | 23:51 |
radiofree | so we can fix it | 23:51 |
Walkerdine | The flashing instructions can use an update too but Ill tell you guys about that in the post | 23:52 |
radiofree | thanks | 23:52 |
Walkerdine | I had a few issues with the TEGRA_TOOLS_DIR, so I took it out of the script | 23:53 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!