*** zoli__ has joined #baserock | 02:20 | |
*** drnic has quit IRC | 05:35 | |
*** drnic has joined #baserock | 05:40 | |
*** paulsherwood has quit IRC | 05:40 | |
*** paulsherwood has joined #baserock | 05:41 | |
*** paulw has joined #baserock | 06:18 | |
*** mike has joined #baserock | 06:35 | |
*** mike is now known as Guest12343 | 06:35 | |
*** mariaderidder has joined #baserock | 07:25 | |
*** drnic has quit IRC | 07:31 | |
*** jjardon has quit IRC | 07:31 | |
*** drnic has joined #baserock | 07:43 | |
*** jjardon has joined #baserock | 07:53 | |
*** zoli__ has quit IRC | 07:57 | |
*** zoli___ has joined #baserock | 07:57 | |
*** Guest12343 has quit IRC | 08:02 | |
*** bashrc has joined #baserock | 08:04 | |
*** jonathanmaw has joined #baserock | 08:09 | |
*** jjardon has quit IRC | 08:10 | |
*** mdunford has joined #baserock | 08:15 | |
*** Guest12343 has joined #baserock | 08:17 | |
*** edcragg has joined #baserock | 08:22 | |
*** CTtpollard has joined #baserock | 08:46 | |
*** mike has joined #baserock | 09:06 | |
*** mike is now known as Guest34969 | 09:06 | |
*** Guest12343 has quit IRC | 09:07 | |
*** Guest34969 has quit IRC | 09:37 | |
*** Guest34969 has joined #baserock | 09:52 | |
*** paulw has quit IRC | 10:51 | |
*** paulw has joined #baserock | 10:52 | |
*** De|ta has quit IRC | 11:58 | |
*** tiagogomes_ has quit IRC | 12:22 | |
*** 18VAAA24X has joined #baserock | 12:23 | |
*** sambishop has quit IRC | 12:35 | |
* radiofree wonders if he could build a functional baserock system with systemd 44 | 12:36 | |
rjek | hahaha | 12:37 |
rjek | hahahaha | 12:37 |
rjek | hahahahaha | 12:37 |
paulsherwood | ? | 12:38 |
rjek | radiofree: You probably could, but I doubt much in the reference definitions would play along. | 12:38 |
radiofree | how much actually depends on systemd though? | 12:38 |
radiofree | to build that is, i don't care if everything breaks, as long as i can get in and compile | 12:39 |
pedroalvarez | we disabled some things from busybox when moving to a newer systemd | 12:39 |
pedroalvarez | (networking services, etc) | 12:39 |
radiofree | any service files we added might need some changing (if they're using new features) | 12:39 |
pedroalvarez | it should be possible I guess | 12:40 |
radiofree | did there ever exist a baserock system with systemd v44? | 14:11 |
radiofree | maybe i can see just how reproducible baserock is :) | 14:12 |
*** 18VAAA24X is now known as tiagogomes | 14:12 | |
radiofree | Date: Fri Mar 16 01:57:47 2012 +0100 | 14:13 |
* radiofree can't remember if baserock is that old | 14:13 | |
rjek | Ish | 14:13 |
rjek | Very early days | 14:13 |
radiofree | ooh it is | 14:14 |
radiofree | initial commit was Oct 8 2011 | 14:14 |
* radiofree can't find the release tag for anything before 14 | 14:16 | |
*** jjardon has joined #baserock | 14:16 | |
radiofree | is there a document of release->sha anywhere? | 14:17 |
persia | I think just the tags. | 14:18 |
persia | Although I seem to remember mentions of releases in the commit messages for earlier releases. Maybe try inspecting the set of commits that have the systemd you want in something like tig or gitk to see if you find any hints? | 14:19 |
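(The tag-and-commit-message hunting persia suggests can be sketched in shell. `git tag --contains` lists every tag whose history includes a given commit, and `git log --grep` searches commit messages for release hints. The repo, sha, and tag name below are throwaway stand-ins created on the spot so the commands run anywhere; in practice you would run the last two commands inside a clone of definitions.git.)

```shell
set -e
# Throwaway repo so the sketch is self-contained:
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "import systemd v44"
sha=$(git rev-parse HEAD)   # the commit we want to map to a release
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "prepare release 14"
git tag baserock-14

# Which release tags contain the commit we care about?
git tag --contains "$sha"
# Any release hints buried in commit messages?
git log --oneline --grep='release'
```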
radiofree | this is hard | 14:20 |
radiofree | http://git.baserock.org/cgi-bin/cgit.cgi/baserock/baserock/definitions.git/commit/?id=ac1462d117e4734c0f961c7c44126ae33350283c | 14:21 |
radiofree | rm/forwardpatch doesn't exist anymore | 14:21 |
pedroalvarez | radiofree: ugh.. that definitions commit is too old | 14:27
pedroalvarez | wouldn't it be easier to try a more recent one, downgrading systemd? | 14:27
pedroalvarez | I was thinking from just before ea286f89fbc7eda7efc8245be0fcc45475ddd67c happened | 14:29 |
*** zoli__ has joined #baserock | 14:35 | |
*** jjardon_ has joined #baserock | 14:38 | |
radiofree | pedroalvarez: i tried downgrading in a recentish system, but as you can imagine it's quite a severe jump | 14:38 |
radiofree | i don't really want to spend any time on this, i'll see if i can find an ancient fedora or other system | 14:38 |
radiofree | s/jump/fall | 14:38 |
*** SotK_ has joined #baserock | 14:40 | |
*** perryl_ has joined #baserock | 14:40 | |
rjek | I know Mithrandir has been maintaining Debian packages for systemd for a long time; perhaps you could use that dgit thing of Ian Jackson's to see if it goes back that far >:) | 14:41 |
*** inara has quit IRC | 14:42 | |
*** jjardon has quit IRC | 14:42 | |
*** zoli___ has quit IRC | 14:42 | |
*** lachlanmackenzie has quit IRC | 14:42 | |
*** persia has quit IRC | 14:42 | |
*** perryl has quit IRC | 14:42 | |
*** SotK has quit IRC | 14:42 | |
*** inara has joined #baserock | 14:42 | |
*** persia has joined #baserock | 14:42 | |
*** persia has quit IRC | 14:42 | |
*** persia has joined #baserock | 14:42 | |
*** lachlanmackenzie has joined #baserock | 14:43 | |
*** De|ta has joined #baserock | 14:47 | |
*** jjardon_ is now known as jjardon | 14:48 | |
*** perryl_ is now known as perryl | 14:49 | |
pedroalvarez | ugh, I tried to build that sha1 of definitions, and stage1-gcc failed :/ | 14:53 |
* pedroalvarez stops playing | 14:56 | |
*** gary_perkins has joined #baserock | 15:04 | |
*** tiagogomes has quit IRC | 15:06 | |
*** tiagogomes_ has joined #baserock | 15:06 | |
*** mariaderidder has quit IRC | 16:12 | |
*** paulw has quit IRC | 16:15 | |
*** gary_perkins has quit IRC | 16:27 | |
*** jonathanmaw has quit IRC | 16:35 | |
*** persia_ has joined #baserock | 16:35 | |
*** Guest34969 has quit IRC | 16:46 | |
*** edcragg has quit IRC | 17:06 | |
*** mdunford has quit IRC | 17:18 | |
*** paulw has joined #baserock | 17:38 | |
paulsherwood | folks.. i've +2'd and merged https://gerrit.baserock.org/#/c/991/ unilaterally since i heard on another channel that this was blocking someone. in general i believe i should have waited for another reviewer, but i think it's too late today for that | 17:51
*** lachlanmackenzie has quit IRC | 17:51 | |
paulsherwood | (and it's a one-line change, by a frequent contributor) | 17:52 |
paulsherwood | please let me know if folks think this kind of shortcut should never be taken, rather than just discouraged | 17:54 |
nowster | paulsherwood: agreed with that change | 17:54 |
paulsherwood | nowster: tvm | 17:54 |
nowster | ownership shouldn't go into tarballs, but it happens | 17:54 |
nowster | certainly not for source distribution tarballs | 17:56 |
persia | We've a history of doing that sort of thing, but generally we've discussed it on this channel *before* doing the merge. | 17:57 |
persia | I'm in favour of simply discouraging the shortcut, rather than prohibiting it, but do think it best to communicate before, rather than after, just in case anyone has concerns, especially if you hear things "in another channel" | 17:58 |
paulsherwood | persia: fair. that occurred to me after i'd hit the button. i'm still impulsive at times | 18:00 |
paulsherwood | persia: incidentally, what did you think of the ybd gang numbers? | 18:05 |
persia | I liked "herd" better, but in the absence of comparisons over different environments, the numbers were somewhat meaningless to me. | 18:06
persia | That running 10 jobs in a single instance, rather than one, improves things, tells me that the tool is not taking advantage of available parallelism very well. | 18:08 |
persia | It would be more interesting to compare running 10 instances on 40 cores to running 10 independent images, each with 4 cores, with one instance each. | 18:08 |
persia | That helps understand the balance between storage contention and cpu/memory contention | 18:09 |
*** paulw has quit IRC | 18:09 | |
paulsherwood | 'not taking advantage of available parallelism very well'... lots of parts of the work are not parallelisable... eg creating artifact, configure-commands, install-commands | 18:18
paulsherwood | at least, there were reliability problems when ybd parallelised configure... morph doesn't parallelise that either iiuc | 18:18 |
paulsherwood | (so i think this is not the tool, but the intrinsic nature of the workload) | 18:20 |
paulsherwood | i don't understand what you mean by 19:08 < persia> It would be more interesting to compare running 10 instances on 40 cores to running 10 independent images, each with 4 cores, with one instance each. | 18:21 |
paulsherwood | what is an 'image' in this context? | 18:21 |
* paulsherwood did compare different environments... macbook pro vs AWS... but could try others if persia has something specific in mind | 18:22 |
persia | Borrowed from SSI vs. MSI terminology: a large computer system can be "single system image", which runs one OS, or "multiple system image", running several OSes. The first is the classic minicomputer architecture, and the second more of an MPI architecture. | 18:23
persia | That was for "image" | 18:23 |
persia | Anyway, it would be interesting to me to see if there were differences running many processes in the same system vs. running many processes in many systems, for the same total core count/memory, as this would tell if there was an IO issue. | 18:24 |
paulsherwood | but that introduces the extra variable of connectivity between the systems? | 18:24 |
persia | The macbook vs. AWS numbers aren't interesting because 1) not everyone has a macbook, and this isn't likely to be shared build infrastructure for a team, and 2) AWS infrastructure is unreliable: aside from the limited metrics involved in sizing, it is almost impossible to understand the topology of system interconnect within a system, or the nature of the available storage bandwidth (and even if one can determine these, they tend to change on the next reservation, due to heterogeneity of Amazon's infrastructure) | 18:25
persia | Build times on unconnected systems are only interesting within a system | 18:26 |
persia | And for that matter, unless we comprehend the limiting factor for a given system, optimisation may not help the general case. | 18:26 |
persia | Yes, I'm telling you why this is hard, unhelpfully. Sorry about that. | 18:26
paulsherwood | heh. i disagree with your reasoning | 18:26 |
persia | More helpfully, I think that 1) it makes sense to understand the nature of the parallelism exposed by running a herd. | 18:26 |
persia | And 2) I think it makes sense to try to determine a common sort of environment that would be a build server, for optimisation against. | 18:27 |
persia | You disagree with my reasoning about why I find the numbers uninteresting? I'd be delighted if you could convince me they were interesting. | 18:28 |
paulsherwood | no, you can choose what you find interesting, of course. i disagree with some of the conclusions you're drawing | 18:29 |
persia | Help me understand the distinction you're making | 18:30 |
paulsherwood | i believe that the AWS infrastructure i chose is a reasonable example of a 'cloud system' and that the improvement by the gang approach is significant. i would expect to see similar results on other systems with many cores, modulo their relative io vs cores vs memory capabilities | 18:31
persia | Ignoring the topologies, yes, running a herd is better than running an instance. | 18:32 |
persia | This suggests there is parallelism in the workload that isn't being exploited except by herding. | 18:32 |
persia | I submit that it would be interesting to understand this parallelism and exploit it, rather than just throwing lots of hardware at it and depending on a random number generator to guess about parallelism. | 18:33 |
paulsherwood | yes, agreed. i hoped that the scenarios i chose (and the logs) gave enough info to begin to understand it | 18:34 |
persia | And while I agree that AWS is a reasonable example of a "cloud system", I don't believe that 1) it makes sense to schedule large computation loads on "cloud infrastructure" (better to schedule many small ones in an elastic manner), or 2) that it is a sensible optimisation target simply because the variables won't be constant. | 18:34 |
persia | As I understand it, it's mostly a matter of running more of the builds in parallel. | 18:35 |
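(The "gang"/"herd" approach under discussion amounts to launching several independent build instances concurrently and letting the kernel schedule them. A runnable toy sketch follows; `build_one` is a hypothetical stand-in for a real build invocation such as ybd, and here it only records completion so the sketch runs anywhere:)

```shell
# Toy sketch of the "herd": launch N build instances concurrently, then wait.
# build_one is a hypothetical stand-in for a real build command (e.g. ybd);
# it just logs which instance finished.
log=$(mktemp)
build_one() { echo "instance $1 done" >> "$log"; }
for i in 1 2 3 4; do
    build_one "$i" &   # each instance is an independent background process
done
wait                   # block until every instance has finished
sort "$log"            # four completion lines, one per instance
```

In the real setup the instances share an artifact cache, so duplicated work between them is mostly avoided; the exploitable parallelism comes from building unrelated chunks simultaneously, which is exactly what a single serial instance fails to do.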
persia | The current model seems to have build dependencies on a strata level, which is confusing. | 18:35 |
persia | I don't like packages, because it means the deployed systems are unreliable, but I do rather like the notation by which packages are defined, as I think it provides better guidance to build automation of what to build when, and how. | 18:35 |
persia | (to be clear, the specific bit I don't like about packages is that there is post-install integration logic that is executed on a per-install basis, opening the possibility for two installs to differ slightly, even when performed from the same sequence for an intended identical system) | 18:37 |
paulsherwood | but anyway, so far i have nothing to compare this with... the only published build-times i can find for for baserock prior to this are single machines running morph | 18:37 |
paulsherwood | regarding the model, i agree with you and would like to improve on it. i'm hoping that the work sssssssssssam is doing will lead to a mechanism which allows us to transform the model much more easily than currently | 18:38 |
paulsherwood | (as a result i'm holding back on suggesting improvements to definitions format for the moment) | 18:41 |
persia | Indeed. To a certain degree, there is an elegance in strata vs. control files or spec files, but there is also some inflexibility (as one would expect from a name based on stone) | 18:41 |
paulsherwood | lol | 18:41 |
*** paulw has joined #baserock | 20:12 | |
*** zoli__ has quit IRC | 23:26 | |
*** zoli__ has joined #baserock | 23:27 | |
*** zoli__ has quit IRC | 23:29 | |
*** zoli__ has joined #baserock | 23:44 | |
*** zoli__ has quit IRC | 23:46 | |
*** zoli__ has joined #baserock | 23:54 | |
*** zoli__ has quit IRC | 23:56 | |
*** zoli__ has joined #baserock | 23:57 | |
*** zoli__ has quit IRC | 23:59 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!