IRC logs for #baserock for Saturday, 2015-07-18

07:43 <paulsherwood> is 'callback' a very well understood term? my only experience of it is callback-hell and callback-spaghetti when investigating other folks' asynchronous code
07:43 <jmacs> Yes
07:44 <paulsherwood> great. so in a non-asynchronous situation (say a single-threaded batch program) why would one consider using callbacks?
07:45 <jmacs> One example is when you need to supply a method to a function rather than data
07:45 <jmacs> Such as a sorting function where you need to pass in a method to compare the elements in your array
07:45 <paulsherwood> ah, ok
07:46 <jmacs> In a single-threaded situation I can't think of any other reason.
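jmacs's sorting example can be sketched in a few lines of Python (an illustration, not code from the discussion): the sort routine doesn't know how to order the elements itself, so it calls back into a caller-supplied comparator.

```python
from functools import cmp_to_key

def compare(a, b):
    # Comparator callback: negative if a < b, 0 if equal, positive if a > b
    return (a > b) - (a < b)

def sort_with_callback(items, comparator):
    # The sort routine calls back into the caller-supplied comparator
    # for each pair of elements it needs to order
    return sorted(items, key=cmp_to_key(comparator))

result = sort_with_callback([3, 1, 2], compare)  # [1, 2, 3]
```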
07:46 <paulsherwood> but in that case, for clarity, could/should i call the parameter 'comparison_callback' rather than 'comparison_method'?
07:47 <jmacs> Personally I would find both of those equally understandable
07:47 <paulsherwood> your example does help for the code i'm looking at... i think that's what the author is doing. but given i didn't grok 'callback' i was mystified what the purpose of the param was
07:47 <paulsherwood> thanks jmacs - that's helped a lot
07:48 <ratmice___> yeah, I don't think there is any synchronous/asynchronous specificity to the term callback; in general it just means that the caller understands what needs to happen, while the called function specifies the API through which the caller can make it so
07:49 <ratmice___> e.g. the term is used regularly in unthreaded C, which is synchronous
07:49 <paulsherwood> i get that now, thanks. in the example jmacs described i think i'd favour just calling the param 'comparison' rather than 'comparison_callback' though...
07:50 <paulsherwood> otherwise it seems like drifting towards 'today_date' and 'flag_bool' and 'username_string'? :)
07:51 <ratmice___> i like comparator
07:51 <paulsherwood> even better :-)
07:51 <ratmice___> though often called 'ord'
07:52 <paulsherwood> really? is that a common name?
07:52 <jmacs> qsort just calls it "compar"; I would probably pay for the extra byte and say "compare" though.
07:52 <paulsherwood> heh
07:52 <ratmice___> in some circles :), short for 'order'
07:52 <paulsherwood> circles where every byte counted, once, and old habits die hard? :-)
07:55 <ratmice___> even i'm not interested enough in computational etymology to know the answer :)
07:57 <paulsherwood> ack :)
07:58 <ratmice___> but even python seems to have it, https://docs.python.org/2/library/functions.html#ord
07:59 <ratmice___> in general, I think 'ord' is just a different perspective; rather than being a function you pass in to sorting, it's an attribute of the objects passed in
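The "attribute of the objects" perspective can be sketched in Python (the `Package` class here is made up for illustration): instead of passing a comparator in, the class defines its own ordering, and sorted() needs no callback at all.

```python
class Package:
    # Hypothetical example type: the ordering lives on the object
    # itself, so sorted() needs no externally supplied comparator
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

    def __lt__(self, other):
        return self.priority < other.priority

builds = [Package("gcc", 2), Package("glibc", 1)]
order = [p.name for p in sorted(builds)]  # ['glibc', 'gcc']
```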
09:26 <paulsherwood> trying to build master definitions on an aws machine... broken at stage1-gcc - any idea what's going on here? http://sprunge.us/VQPc
09:31 <paulsherwood> ah... c++ compiler missing or inoperative :/
10:15 <paulsherwood> 15-07-18 10:14:48 [2/240/240] [stage1-gcc] Elapsed time for build 00:01:35
11:41 <radiofree> That's fast!
11:44 <paulsherwood> 15-07-18 11:43:19 [239/240/240] [TOTAL] Elapsed time 01:26:37 (build-system-x86_64)
11:45 <paulsherwood> that's on AWS m4.10xlarge
11:46 <radiofree> can we use that for mason?
11:47 <paulsherwood> http://sprunge.us/HYGf
11:47 <paulsherwood> it's not cheap :)
11:49 <radiofree> is "15-07-18 10:39:17 [24/240/240] [gcc] Elapsed time for assembly 00:05:40" the build time + artifact creation time?
11:50 <paulsherwood> yes in general, but it may include creating some dependencies, and it includes the time to clean up the staging area
11:51 <paulsherwood> there's a separate line for artifact creation time
11:51 <radiofree> 3 minutes to build gcc!
11:53 <paulsherwood> but it's not as fast as i hoped... this is max-jobs 60 on a 40-core system
11:55 <radiofree> 3 minutes for gcc is pretty damn fast!
11:57 <paulsherwood> yes, ok :)
11:58 <paulsherwood> i'm now running a herd of 6 ybd instances at max-jobs 10, to see if my pseudo-distbuild actually has any merit
12:02 <radiofree> then try building a Qt system!
12:02 <paulsherwood> ok will do
12:02 <paulsherwood> (assuming the one in master actually builds)
12:04 * radiofree is reminded that we need to upgrade to qt 5.5
12:55 <paulsherwood> 5 15-07-18 12:54:22 [72/240/240] [TOTAL] Elapsed time 01:04:29
12:56 <paulsherwood> counter is wrong, but the herd did it in 75% of the time :)
12:57 <paulsherwood> http://sprunge.us/MjQX
13:24 <persia> A herd of 6 did it in 75% of the time of a single build?
13:24 <persia> What happens if you build something further from the base, where we might expect more parallelism to be extracted?
13:25 * paulsherwood curses that he deleted the artifacts
13:25 <persia> That's actually useful for this sort of testing.
13:25 <persia> The part that I don't understand is whether there is *expected* parallelism in the given build artifacts.
13:25 <paulsherwood> 6 instances at max-jobs 10 completed in 75% of the time of 1 instance at max-jobs 60
13:26 <persia> For build-essential, I expect almost no parallelism.
13:26 <paulsherwood> yup
13:26 <paulsherwood> there's no 'algorithm' here... all the herd instances build build-essential
13:27 <paulsherwood> that could be improved by some locking, but at the cost of increased complexity
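A minimal sketch of the locking idea (hypothetical; this is not what ybd does): each herd member takes a non-blocking exclusive lock per component, so only one instance builds it while the others move on to other work.

```python
import fcntl
import os

def try_claim(component, lockdir="/tmp/ybd-locks"):
    # Hypothetical helper: take a non-blocking exclusive flock named
    # after the component. Returns a held fd on success, or None if
    # another herd member is already building it.
    if not os.path.isdir(lockdir):
        os.makedirs(lockdir)
    fd = os.open(os.path.join(lockdir, component), os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd  # we won the race: build this component
    except IOError:
        os.close(fd)
        return None  # lost the race: skip it, build something else
```

The lock is released when the returned fd is closed (or the process exits), so a crashed worker can't wedge the herd the way a stale lockfile would.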
13:27 <persia> Sure, but I don't care about the statistics if the thing being built contains almost no parallelism.
13:27 <persia> If the end artifact has many leaves, then it becomes interesting.
13:30 <paulsherwood> yup. i'm wondering if it's worth kicking off a build as soon as the cache-key is calculated, rather than waiting til they're all done...
13:30 <persia> Could you expand that a little?
13:30 <paulsherwood> this may be a speedup for new users at least, where up to an hour can be spent downloading git repos
13:31 <persia> Probably better to allow some configuration.
13:31 <persia> In some environments, it takes minutes to download and hours to build. In other environments, the converse is true.
13:32 <persia> Or, if you have a way to stop jobs, you could have the download and build race.
13:32 <persia> Although such a race would crush performance on IO-limited hardware.
13:32 <paulsherwood> ack
13:33 * persia wonders if there is a way to estimate build time in a given environment
13:34 <paulsherwood> before running? some grepping of previous logs could provide a heuristic
13:34 <persia> Since we can estimate download time based on size and the first few seconds of download speed, it might be interesting to have heuristics to determine whether it is better to download or to build.
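persia's download-or-build idea could look something like this (a hypothetical heuristic; every name here is invented): project the remaining download time from the rate observed in the first few seconds, and compare it against a predicted local build time.

```python
def estimated_download_time(artifact_bytes, sampled_bytes_per_sec):
    # Naive projection from the transfer rate seen early on
    return float(artifact_bytes) / sampled_bytes_per_sec

def prefer_download(artifact_bytes, sampled_bytes_per_sec, predicted_build_secs):
    # Fetch the prebuilt artifact only if the projected download
    # would finish before a local build does
    return estimated_download_time(artifact_bytes, sampled_bytes_per_sec) < predicted_build_secs

# e.g. a 100MB artifact at 10MB/s (~10s) beats a 60s build:
prefer_download(100e6, 10e6, 60)  # True
```

The hard part, as the discussion notes, is `predicted_build_secs`: the download side is easy to estimate, the build side is not.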
13:35 <persia> Since the build time depends on the processor speed, processor width, number of processors, IO speeds, etc. for *each* different source, I wonder what sort of data we'd need.
13:35 <persia> Attempting to grep old logs strikes me as fragile.
13:36 <persia> Especially if those logs were not from the identical environment (including factors like moving a laptop between different network access points)
13:36 <paulsherwood> yup
13:44 <paulsherwood> do we have any statistics for elapsed time saved using distbuild networks vs single machines?
13:51 <persia> There's no such thing.
13:51 <persia> It entirely depends on whether the task can be parallelised, and whether that parallelism is larger than can be done on a single machine.
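persia's point is essentially Amdahl's law: the speedup a distbuild network can deliver is bounded by the fraction of the build that parallelises, so there is no single "time saved" figure. A quick illustration with made-up numbers:

```python
def amdahl_speedup(parallel_fraction, machines):
    # Amdahl's law: the serial fraction caps the achievable speedup,
    # no matter how many machines the distbuild network adds
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / machines)

# If only half of a build parallelises, 6 machines give ~1.7x, not 6x:
amdahl_speedup(0.5, 6)
```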
13:53 <ratmice___> I suppose you could use something like avahi for moving between networks, but that seems a bit excessive?
13:55 <ratmice___> anyhow, i've had some interest in adaptive scheduling as well
13:57 * paulsherwood is ashamed to admit he doesn't know what 'adaptive scheduling' is
13:59 <persia> ratmice___: For me, the interesting question is about elasticity: given software-defined infrastructure, how should workloads be deployed to reduce bottlenecks, and how does this answer change depending on the infrastructure in question?
14:00 <ratmice___> in general I think we can do a much better job with whole-system build graphs, because e.g. for everything but generated headers you could in theory compile the .o files in parallel for every .o on the system, and just wait on the (unparallelisable) linker to complete
14:01 <ratmice___> paulsherwood: e.g. avoiding scheduling memory-hungry compilations together
14:02 <paulsherwood> ah, ok.
14:03 <ratmice___> I don't think we're going to get very far down this rabbit hole with make, unfortunately :(
14:03 <paulsherwood> this requires that a worker has some knowledge of what other workers are doing, or of available/unloaded system capacity
14:04 <paulsherwood> ratmice___: replacing make is maybe a bit beyond our scope at this point, though? :)
14:04 <ratmice___> *nod* :)
14:04 <paulsherwood> maybe later... :-)
14:05 <persia> Part of the issue is the computational cost of building build graphs. One interesting aspect of ybd is that it doesn't even try.
14:05 <paulsherwood> disappointingly, one of this herd has failed to build glibc... http://sprunge.us/YbVS
14:05 <persia> As it turns out, this works for some things, and doesn't work for others (and I don't think we know why yet).
14:06 <ratmice___> the problem though is starting to become more apparent, e.g. ./configure for many projects represents a large portion of compile time
14:06 <paulsherwood> true. and a surprising amount of time is being spent (at least on my machine) clearing staging areas and creating artifacts
14:07 <persia> How are the staging areas being cleared? An unlink operation should be fairly lightweight.
14:07 <paulsherwood> shutil.rmtree
14:08 <paulsherwood> is that unnecessary? can it just unlink?
14:08 <paulsherwood> https://github.com/devcurmudgeon/ybd/blob/master/sandbox.py#L73
14:11 <persia> It depends on how careful you want to be.
14:12 <persia> Obviously, it depends on your implementation, but the docs I find for shutil.rmtree() indicate that it deletes each file in turn, then deletes directories once they are empty, until the tree is gone.
14:12 <paulsherwood> ideally i'd have a background task that deletes staging areas as their owners release them
14:12 <persia> Just unlinking the top-level directory would be faster, but would give the filesystem less guidance about what is happening.
14:13 <paulsherwood> could this result in the filesystem failing to release the space properly?
14:13 <persia> Depends on the filesystem.
14:13 <paulsherwood> ext4, btrfs?
14:14 <persia> More importantly, for filesystems that do release things correctly in that circumstance, that often happens as a background gc job, so the impact on other operations is less clear.
14:14 <paulsherwood> ok
14:15 <persia> I only know that it depends on the filesystem. I don't know for specific filesystems. I think ext4 marks it as deleted in the journal, but doesn't actually clean up until the journal is resolved. I believe btrfs has a background gc, but I could be wrong in both cases.
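The background-cleanup idea mentioned above could be sketched like this (illustrative only, not ybd's actual code): rename() the staging area aside, which is cheap and atomic within one filesystem, then let a worker thread run the slow shutil.rmtree() off the critical path.

```python
import os
import shutil
import tempfile
import threading

def release_staging(path):
    # Rename is near-instant, so the builder can reuse the name
    # immediately; the slow recursive delete runs in the background
    doomed = path + ".doomed"
    os.rename(path, doomed)
    cleaner = threading.Thread(target=shutil.rmtree, args=(doomed,))
    cleaner.start()
    return cleaner

# Throwaway demonstration with a temporary directory:
staging = os.path.join(tempfile.mkdtemp(), "staging")
os.makedirs(os.path.join(staging, "subdir"))
release_staging(staging).join()  # both staging and staging.doomed are gone
```

This keeps the space-release semantics of rmtree (addressing persia's filesystem concern) while taking the wait out of the build loop; the trade-off is that disk space is reclaimed slightly later.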
14:15 <paulsherwood> i think i'll leave it as-is for now, then
15:04 * paulsherwood decides that the counter is not wrong, for herds. it's useful info to know how many actual builds each worker did
15:09 <paulsherwood> so, i've tried creating a build-system-x86_64, from a cache where base-system-x86_64 artifacts are present
15:10 <paulsherwood> single instance, max-jobs 60 http://sprunge.us/FPfJ
15:10 <paulsherwood> 00:19:33
15:11 <paulsherwood> 6 instances, max-jobs 10 http://sprunge.us/RGgH
15:11 <paulsherwood> 00:08:36
15:24 * paulsherwood notices http://users.tinyonline.co.uk/gswithenbank/collnoun.htm
15:25 <paulsherwood> maybe 'gang' would be a better metaphor than 'herd'
15:27 <ratmice___> hah, a congress of baboons, seems apt
15:29 <paulsherwood> actually... i notice that baboon 5 was the one that actually finished the build: 5 15-07-18 14:42:16 [26/135/240] [TOTAL] Elapsed time 00:07:08
15:30 <paulsherwood> it took another minute and a half for the others to realize they'd missed the boat :-)
15:40 <persia> The increased difference in speed for creating more leafy things makes sense, and is nice to see. Promising as a means to increase the number of different systems being built simultaneously for candidate testing.
15:40 <paulsherwood> i think so too :)
15:41 <paulsherwood> it's possible to see how much racing/duplication of effort is going on by grepping the logs for 'Bah'
15:43 <paulsherwood> i'm pretty sure there'll be some trick to reduce the racing
15:46 <paulsherwood> fribidi fails to build http://sprunge.us/CPNA
17:23 *** zoli__ has joined #baserock
17:32 *** zoli__ has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!