*** cs_shadow has quit IRC | 01:04 | |
*** tristan has joined #buildstream | 03:09 | |
*** leopi has joined #buildstream | 04:13 | |
*** leopi has quit IRC | 05:08 | |
*** leopi has joined #buildstream | 05:47 | |
*** tristan has quit IRC | 06:15 | |
*** tristan has joined #buildstream | 06:37 | |
*** noisecell has joined #buildstream | 06:49 | |
*** tristan has quit IRC | 07:14 | |
*** tristan has joined #buildstream | 07:45 | |
qinusty | Can someone review https://gitlab.com/BuildStream/buildstream/merge_requests/627 if you've got a few minutes to spare | 08:08 |
*** phildawson has joined #buildstream | 08:08 | |
tristan | qinusty, I'll take a look | 08:11 |
tristan | valentind, I'm hoping to roll out a 1.1.6 today, do you think we'll be able to include the deterministic staging fixes within the next few hours ? | 08:12 |
qinusty | cheers, I feel like we might want a dedicated docs page for Configurable Warnings once we start adding more core warnings. But for now they're in format_project.rst | 08:12 |
jjardon | hi, anyone around to review https://gitlab.com/BuildStream/buildstream-docker-images/merge_requests/56/? all docker images will be broken until we merge that | 08:16 |
*** bochecha has joined #buildstream | 08:16 | |
tristan | jjardon, I asked Chandan to look at it in my email since he maintains that | 08:19 |
tristan | jjardon, I think maybe a very much smaller merge request which *only* fixes the ruamel.yaml problem might have a better chance of getting fast tracked | 08:19 |
tristan | qinusty, format_project.rst sounds like exactly the right place | 08:20 |
tristan | qinusty, at least for documenting the functionality... indeed capturing all the individual warnings themselves is... different | 08:20 |
valentind | tristan, I think I can. I fixed some stuff. Let me check what is remaining. | 08:23 |
tristan | valentind, great :) | 08:32 |
valentind | tristan, Can we merge !565 also? | 08:33 |
tristan | oh nice... let's look at that ! | 08:34 |
valentind | I think we forgot about it. | 08:34 |
* tristan pushes it on the stack temporarily... to pop back down to fatal warnings after | 08:34 | |
tristan | valentind, ensuring we don't forget about it means: Add it to the 1.2 milestone, add the Urgent label... that should have been done by one of us, to issue 498 | 08:36 |
tristan | but we let it slip | 08:36 |
tristan | valentind, gah, I have to nag you that you have a tendency to not restructure surrounding code when making changes | 08:37 |
tristan | valentind, why are we passing os.environ() to run_bwrap() ? | 08:38 |
tristan | instead of removing the environment arg there ? | 08:38 |
tristan | valentind, these things add up, and over time the hedges get overgrown with dead code of all kinds | 08:38 |
valentind | tristan, Oh ok. | 08:39 |
valentind | OK, I can clean that up. | 08:39 |
*** noisecell has quit IRC | 08:39 | |
qinusty | cheers tristan. When you mention keeping hold of the node with the provenance associated with it, do you mean the root node, or just the `fatal-warnings` node? Since the fatal-warnings node is a bool or a list, it can't make use of _yaml.node_get_provenance() | 08:43 |
tristan | qinusty, I see the conundrum | 08:44 |
tristan | qinusty, I think there are two more important conundrums to think about | 08:44 |
tristan | well, one of them is probably API | 08:45 |
tristan | The other is more serious, let's ask that question first | 08:45 |
tristan | qinusty, Does `fatal-warnings` apply only to the project where it is declared ? Or, is it important to allow a depending project to dictate how to behave with warnings issued by processing related to elements in subprojects ? | 08:46 |
qinusty | I'd argue that the root project decides which it wants fatal | 08:47 |
tristan | That's not how you implemented it, I think :) | 08:47 |
tristan | However, they both have different properties, and I think I like the way it is too | 08:48 |
qinusty | I agree, I hadn't thought about it until now | 08:48 |
tristan | I.e... if I'm gnome-build-meta and I want to make some of my warnings fatal, do I want my build to abort when I use a new version of freedesktop-sdk ? | 08:49 |
tristan | I sort of think that no, I rather want to send the freedesktop-sdk maintainers an email on their list, and ask them to consider making some specific warnings fatal | 08:49 |
qinusty | That makes sense too, I guess it comes down to how much control you want over subprojects. | 08:50 |
tristan | but the guarantees that freedesktop-sdk provides me, is their business | 08:50 |
tristan | in that light, I think you did the right thing | 08:50 |
tristan | The point of subprojects is all about controlling what you want to depend on, not controlling the subprojects themselves; however, it is still an interesting question | 08:51 |
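[Editor's note: a minimal sketch of the semantics agreed above, with illustrative names (`Project`, `Element`, `WarningError`) that are not the real BuildStream API: each element consults only the `fatal-warnings` of the project that declares it, so a toplevel project cannot force warnings fatal inside its subprojects.]

```python
# Sketch only: fatal-warnings are scoped to the declaring project.

class WarningError(Exception):
    pass

class Project:
    def __init__(self, name, fatal_warnings=()):
        self.name = name
        self.fatal_warnings = set(fatal_warnings)

class Element:
    def __init__(self, project):
        # The project which *declares* this element, not the toplevel one
        self.project = project

    def warn(self, text, warning_token=None):
        # Only the element's own project decides fatality; a depending
        # project has no say over subproject elements.
        if warning_token in self.project.fatal_warnings:
            raise WarningError("fatal warning {}: {}".format(warning_token, text))
        return "WARNING {}: {}".format(warning_token, text)
```

Under this sketch, gnome-build-meta declaring `git:ref-not-in-track` fatal would not abort a build of elements declared by freedesktop-sdk; those still only warn.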
tristan | jjardon, I wonder if you have 2 cents to throw at this ? | 08:51 |
tristan | qinusty, aside from that, the other point I had in my mind was mostly about going to such an extreme length to validate that the project only uses valid warning tokens | 08:52 |
tristan | qinusty, For instance... let's say that you use a bzr source in your project... and you declare a warning from the bzr project to be fatal | 08:53 |
tristan | Eventually, you migrate that to a tarball | 08:53 |
tristan | And you get an exploding target because bzr is not loaded in your pipeline anymore | 08:53 |
qinusty | That was a decision which I am not too defensive about, we can take it or leave it *shrugs*. I'd argue that if you migrate you should clean up your project.conf | 08:53 |
qinusty | Leaving fatal-warnings when you don't need them will just bite you further down the line when you add something which does throw that warning | 08:54 |
jjardon | tristan: agreed with what you say there; do not assume you can change anything from freedesktop-sdk if you are a gnome-build-meta dev | 08:54 |
tristan | jjardon, yeah I think so too | 08:54 |
tristan | qinusty, in that case, I think we can remove the whole debate about the node parsing and all that | 08:55 |
qinusty | Because you don't need provenance if you don't validate | 08:55 |
tristan | qinusty, and we can remove the added API to plugin.py, etc | 08:56 |
jjardon | when someone is working on a specific problem in gnome-build-meta he/she might have no time or not want to fix issues with freedesktop-sdk | 08:56 |
qinusty | How so tristan? | 08:56 |
*** jonathanmaw has joined #buildstream | 08:56 | |
tristan | qinusty, because it's all about reporting a list of what valid tokens exist | 08:56 |
tristan | so that we can... validate them | 08:57 |
tristan | qinusty, and maybe... ensure they are unique ? or, is the prefixing with the plugin kind not enough for this ? | 08:57 |
qinusty | I had assumed that prefixing with kind was enough, | 08:57 |
tristan | qinusty, I mean we can remove the whole 'register_warnings' and 'get_warnings()' stuff | 08:58 |
qinusty | Yeah I get that now | 08:58 |
tristan | :) | 08:58 |
qinusty | Uniqueness though, it is prefixed with the plugin name | 08:58 |
tristan | So, we do validate a *lot* of stuff, and that's great, but I think this is the right balance | 08:58 |
qinusty | s/name/kind | 08:58 |
qinusty | And kinds can't include colons. | 08:59 |
tristan | We do not validate public data, as that's a dynamic thing, people can annotate what they want and then read that back from dependencies | 08:59 |
tristan | right | 08:59 |
qinusty | So I /think/ that it's fairly unique | 09:00 |
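[Editor's note: an illustrative helper, not the real API, showing why kind-prefixing with a colon is sufficient for uniqueness: plugin kinds cannot contain `:`, so prefixed tokens from two different plugins can never collide.]

```python
# Sketch: prefix a plugin warning token with the plugin kind.

def prefix_warning(kind, token):
    # Kinds cannot include colons, so "<kind>:<token>" is unambiguous
    # even if two plugins pick the same bare token name.
    assert ":" not in kind, "plugin kinds cannot include colons"
    return "{}:{}".format(kind, token)
```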
tristan | qinusty, You will have to sort out the exception of the base classes | 09:00 |
qinusty | element and plugin? | 09:00 |
tristan | qinusty, I think it makes sense that when self.warn() is called from core base classes, we don't use any prefix | 09:00 |
qinusty | s/plugin/source/ | 09:00 |
tristan | Plugin,Element,Source... and probably also BuildElement and ScriptElement (should they ever) | 09:01 |
tristan | qinusty, one approach for that is to add a hidden, private keyword argument to Plugin.warn() | 09:01 |
qinusty | so checking isinstance() for each of those? | 09:01 |
tristan | informing it that it is called from "core" | 09:01 |
qinusty | Um | 09:01 |
tristan | No no... never that please :) | 09:01 |
qinusty | Exactly | 09:01 |
tristan | So we could call it `_core` and omit it from the docs | 09:02 |
qinusty | Sounds good | 09:02 |
tristan | if the docs render showing the private `_core` param, it means we just need to override the signature (by placing the signature as the first line in the docstring) | 09:02 |
tristan | there are some precedents for that already | 09:03 |
tristan | well, mostly on constructors though; e.g.: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/element.py#L155 | 09:04 |
tristan | qinusty, that line ensures we don't spam the docs with the arguments of the Element constructor | 09:05 |
tristan | qinusty, here is a better example: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/element.py#L397 | 09:05 |
tristan | that function technically should prefix its private keyword arguments with an underscore, but you get the idea | 09:06 |
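[Editor's note: a sketch of the docstring trick described above. Sphinx can take the first line of the docstring as the rendered signature, hiding a private argument; the `_core` parameter here is the hypothetical one from the discussion, not an existing BuildStream API.]

```python
class Plugin:
    def warn(self, brief, *, warning_token=None, _core=False):
        """warn(brief, *, warning_token=None)

        Issue a warning.

        Args:
           brief (str): The brief warning message
           warning_token (str): An optional configurable warning token
        """
        # The signature on the docstring's first line is what the docs
        # render, so the private `_core` flag stays hidden.
        prefix = "" if _core else "plugin:"
        token = prefix + warning_token if warning_token else None
        return (token, brief)
```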
jjardon | Hi, just coming around to celebrate: https://lists.freedesktop.org/archives/flatpak/2018-August/001233.html :) Thanks buildstream team for building a tool that allows us to make that project possible; keep up the good work! | 09:06 |
* tristan breaks out the champagne ! | 09:07 | |
valentind | tristan, I see that element plugins have an artifact version. But not source plugins. Where do you want me to bump the artifact version? | 09:08 |
phildawson | jjardon :D | 09:09 |
tristan | valentind, once in _versions.py | 09:09 |
valentind | OK | 09:09 |
tristan | valentind, however indeed, I've been thinking about that exact limitation this week in the back of my mind | 09:09 |
tristan | it deserves an issue, but not really a priority right now I think | 09:10 |
tristan | improving that is a part of hardening the plugin loading contract / API | 09:10 |
qinusty | I'm going to 14 valentind ;) | 09:11 |
valentind | tristan, I went through most of your comments. I am running tests before pushing. But there is one comment that I did not understand. Where you asked "Did we forget directories in the zip plugin ?" | 09:12 |
tristan | valentind, directories have permission bits right ? | 09:12 |
tristan | I have the worst experiences with non-executable directories | 09:12 |
valentind | You mean about the order? I should go topdown? | 09:13 |
tristan | valentind, I was under the impression the directories were captured everywhere else, but maybe I'm wrong ? | 09:13 |
tristan | valentind, no I don't really mean that; *and* I could be wrong, but I made that comment when looking at the code: It looked like you are not ensuring deterministic mode bits on the created directories | 09:14 |
valentind | Oh, but it was symlinks. | 09:15 |
valentind | The case for directories is after. | 09:16 |
tristan | Ok | 09:16 |
valentind | tristan, Also it is tested. | 09:17 |
tristan | valentind, sec... looking a couple lines after that comment... it says basically "if (directory) chmod(executable) else chmod(nonexecutable)" | 09:17 |
tristan | valentind, does a zip file really not specify at least if a file is executable or not ? | 09:17 |
valentind | I do not think it is possible. | 09:18 |
valentind | zipfile module does not seem to have any call to os.chmod. Let me check again. | 09:18 |
tristan | Ok, so from some googling, Yes: You can archive and restore permissions, timestamps and such in zip files | 09:20 |
tristan | But, it's only supported on "unix" | 09:20 |
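[Editor's note: Python's `zipfile` does expose unix permission bits, via the high 16 bits of `ZipInfo.external_attr`, but only for archives created on unix (`create_system == 3`). A sketch of reading them back, with a deterministic fallback for archives that recorded nothing:]

```python
import io
import stat
import zipfile

def zip_member_mode(info, default=0o644):
    """Return a zip member's unix permission bits, if recorded.

    ZipInfo.external_attr carries st_mode in its high 16 bits, but only
    archives created on unix (create_system == 3) set it; anything else
    falls back to a deterministic default mode.
    """
    if info.create_system == 3:  # 3 == unix
        mode = (info.external_attr >> 16) & 0o7777
        if mode:
            return mode
    return default

# Round-trip: record 0755 on a member, then read it back
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zi = zipfile.ZipInfo("hello.sh")
    zi.create_system = 3
    zi.external_attr = (stat.S_IFREG | 0o755) << 16
    zf.writestr(zi, "#!/bin/sh\necho hello\n")

with zipfile.ZipFile(buf) as zf:
    mode = zip_member_mode(zf.getinfo("hello.sh"))
```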
valentind | OK, I was wrong. I will fix that. | 09:20 |
valentind | I tried to create zip files but it did not take it. | 09:20 |
tristan | valentind, No no... I think in this case you have it better | 09:21 |
valentind | Also the tests pass. So maybe we need to make something for the zip source in the tests to preserve permissions | 09:21 |
valentind | OK | 09:21 |
tristan | it's an exotic feature for zip, and then having zip try to behave differently when, for example, building a linux system from a windows host or such... sounds like a nightmare | 09:21 |
tristan | valentind, if you can add a mention to the zip plugin docs near the top, probably an `.. attention::` block or such, would be good to specify that limitation | 09:22 |
*** toscalix has joined #buildstream | 09:22 | |
valentind | tristan, Python's zipfile does not seem to support it anyway. | 09:23 |
tristan | valentind, yeah... still we should document that only a limited set of permissions are supported by BuildStream's zip plugin, in that plugin's documentation | 09:24 |
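[Editor's note: hypothetical wording for the `.. attention::` note suggested above. BuildStream plugin docs are rendered from module docstrings (reStructuredText); this sketch holds the text in a plain variable so it stands alone.]

```python
# Hypothetical docs note for the zip plugin's module docstring;
# the wording is illustrative, not taken from the real plugin.
ZIP_PLUGIN_DOC = """
zip - stage files from zip archives
===================================

.. attention::

   The zip format only records unix permission bits for archives
   created on unix hosts, so BuildStream's zip plugin can only honor
   a limited set of permissions when staging zip sources.
"""
```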
qinusty | tristan, would you argue core warnings are raised within our buildstream plugins? | 09:33 |
qinusty | re: _core param, if you consider ref-not-in-track core because it can be used by various plugins then you have to consider all CoreWarnings before prefixing, rather than where the warning came from | 09:34 |
*** toscalix has quit IRC | 09:41 | |
tristan | qinusty, ok so, here is the interesting part :) | 09:44 |
tristan | qinusty, I think that we can avoid the _core param altogether if we have a CoreWarnings class, and indeed it seems that there are going to be cases where the core declares one (like ref-not-in-track), and an external plugin needs to raise it | 09:45 |
tristan | qinusty, in that case, we can do minimal validation on the unprefixed warnings | 09:46 |
qinusty | So if the warning raised exists within CoreWarnings, don't prefix it | 09:46 |
tristan | qinusty, by checking the CoreWarnings | 09:46 |
tristan | Not exactly, I would use identity comparison rather than equality comparison | 09:46 |
tristan | (starting to catch on to python jargon) | 09:47 |
skullman | In a turn of events that should surprise exactly nobody, the setuptools-scm python package (which checks version control for what version to call the package) requires git history to build from git. | 09:47 |
Kinnison | oops | 09:48 |
tristan | qinusty, i.e. we can check, within `Plugin.warn()`... if the passed string "is" a CoreWarning.WARNING_NAME; rather than if it "==" one | 09:48 |
qinusty | Errrrr | 09:48 |
qinusty | But then we're checking for each CoreWarning | 09:48 |
*** cs_shadow has joined #buildstream | 09:48 | |
Kinnison | skullman: Is this a case where SourceTransform's pip plugin might be of use? | 09:48 |
tristan | qinusty, well, we can do it in an any(generator expression) probably (hopefully), but let's say that we have a ridiculously large number of core warnings | 09:49 |
tristan | qinusty, like, 500 of them... How often do we issue a warning ? | 09:49 |
qinusty | Rarely | 09:50 |
tristan | qinusty, well; we could have an either/or parameter, instead of an accompanying boolean; to avoid that overhead | 09:50 |
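[Editor's note: a sketch of the identity-comparison idea from the discussion, with assumed names (`CoreWarnings`, `is_core_warning`, `qualify`): core tokens are shared string constants, recognized with `is` inside an `any()` generator expression, so plugins must reference the `CoreWarnings` constants directly; core tokens stay unprefixed while plugin tokens get the kind prefix.]

```python
# Sketch only: not the real BuildStream API.

class CoreWarnings:
    REF_NOT_IN_TRACK = "ref-not-in-track"
    BAD_ELEMENT_SUFFIX = "bad-element-suffix"

_CORE_TOKENS = [CoreWarnings.REF_NOT_IN_TRACK, CoreWarnings.BAD_ELEMENT_SUFFIX]

def is_core_warning(token):
    # Identity, not equality: only the exact constant objects above
    # count as core warnings.
    return any(token is core for core in _CORE_TOKENS)

def qualify(kind, token):
    # Core warnings are used as-is; plugin warnings get "<kind>:" prefixed.
    return token if is_core_warning(token) else "{}:{}".format(kind, token)
```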
skullman | Kinnison: I don't know. I'm not up to speed on what SourceTransform is up to. I can probably just link to the pypi tarball. | 09:51 |
tristan | skullman, setuptools-scm will, like most makefiles and projects (like ostree); *only* be useful for the relevant history in the tree | 09:52 |
qinusty | Okay, easy way to get a list of constants within a class? | 09:52 |
*** mohan43u has quit IRC | 09:52 | |
tristan | skullman, i.e. I believe that it falls into the same category as other things which should be solvable with https://gitlab.com/BuildStream/buildstream/issues/487 | 09:52 |
tristan | qinusty, I think I'm fine with either approach, the alternative is also nice for other reasons | 09:53 |
tristan | qinusty, consider Plugin.warn(warning_text, *, core_warning_token=None, warning_token=None) | 09:54 |
qinusty | I agree, but getting a list of constants is required for that. I could scrape __dict__ like I did before, but that doesn't get you the identity | 09:54 |
tristan | qinusty, that has the benefit of possibly eventually strongly typing the core_warning_token | 09:54 |
qinusty | Having separate params for core and plugin warnings just feels a little extreme | 09:54 |
skullman | tristan: I'm assuming such, yeah, I just don't think I've time to take out of my current task to unblock it by implementing #487 if a tarball works. | 09:55 |
*** mohan43u has joined #buildstream | 09:55 | |
tristan | skullman, yup, not arguing that :) | 09:55 |
tristan | qinusty, I leave it in your hands then, I think both approaches have their merit :) | 09:56 |
tristan | qinusty, I'm sure that an identity check is possible with an Object that we control, hopefully without touching exactly __dict__, also python's Enum data type might be helpful in this regard | 09:57 |
qinusty | I'm trying to avoid enum because then I need to use .value in some places and not others :P | 09:59 |
*** finn_ has joined #buildstream | 09:59 | |
*** finn has quit IRC | 10:01 | |
tristan | valentind, did you rebase this very recently: https://gitlab.com/BuildStream/buildstream/merge_requests/565 ? | 10:02 |
tristan | It hit the race condition which was fixed yesterday: https://gitlab.com/BuildStream/buildstream/-/jobs/88348855 | 10:02 |
valentind | Probably not. | 10:02 |
tristan | oh | 10:03 |
valentind | I will rebase. | 10:03 |
tristan | valentind, I was alarmed for a moment | 10:03 |
tristan | valentind, I did rebase | 10:03 |
tristan | I kind of assumed, if you changed the env thing; you *must* have rebased before pushing, since merging was the objective | 10:03 |
tristan | Ok, I am not going to make the 1.1.6 now | 10:04 |
tristan | valentind, please try to resolve and merge these things, I will make the 1.1.6 either late tonight or tomorrow morning | 10:04 |
tiagogomes | tristan if you had used a version where the fix landed, artifacts/cas/tmp/ wouldn't even exist | 10:10 |
*** tristan has quit IRC | 10:10 | |
valentind | tristan, I am done with !616, before rebasing onto both master and 1.2 (one commit does not go to master). Should I go forward? Do you want to have a quick look? | 10:11 |
valentind | During that time I will take care of setenv branch. | 10:11 |
*** noisecell has joined #buildstream | 10:46 | |
*** toscalix has joined #buildstream | 10:50 | |
*** finn_ has quit IRC | 11:20 | |
valentind | It seems that when I do Ctrl-C, then type "quit", it just continues building new tasks. | 11:54 |
valentind | This seems to have been broken recently. | 11:54 |
valentind | I think maybe 1.1.5. | 11:54 |
qinusty | I also observed and documented https://gitlab.com/BuildStream/buildstream/issues/531 | 12:01 |
jjardon | valentind: same here when we move to 1.1.5 from 1.1.3 | 12:02 |
valentind | qinusty, thanks. I think the issue I get is more #525 | 12:03 |
*** aiden has quit IRC | 12:10 | |
*** bethw has quit IRC | 12:10 | |
*** laurence has quit IRC | 12:10 | |
*** tlater has quit IRC | 12:10 | |
*** mablanch has quit IRC | 12:10 | |
*** valentind has quit IRC | 12:10 | |
*** milloni has quit IRC | 12:10 | |
*** johnward has quit IRC | 12:10 | |
*** aiden has joined #buildstream | 12:10 | |
*** tpollard has quit IRC | 12:10 | |
*** tpollard has joined #buildstream | 12:10 | |
*** noisecell has quit IRC | 12:11 | |
*** noisecell has joined #buildstream | 12:11 | |
*** mablanch has joined #buildstream | 12:12 | |
*** tpollard has quit IRC | 12:12 | |
*** tpollard has joined #buildstream | 12:12 | |
*** tlater has joined #buildstream | 12:12 | |
*** laurence has joined #buildstream | 12:13 | |
*** johnward has joined #buildstream | 12:14 | |
*** valentind has joined #buildstream | 12:15 | |
*** bethw has joined #buildstream | 12:15 | |
*** leopi has quit IRC | 12:15 | |
*** tpollard has quit IRC | 12:16 | |
qinusty | valentind, Did we have a fix for the linting errors? | 12:16 |
*** tpollard has joined #buildstream | 12:16 | |
*** milloni has joined #buildstream | 12:16 | |
*** tiagogomes has quit IRC | 12:17 | |
*** tiagogomes has joined #buildstream | 12:17 | |
*** leopi has joined #buildstream | 12:25 | |
tlater | qinusty: linting errors? | 12:27 |
tlater | Linter is never wrong! | 12:28 |
tlater | ;p | 12:28 |
*** toscalix has quit IRC | 12:28 | |
qinusty | https://gitlab.com/BuildStream/buildstream/issues/570 | 12:30 |
qinusty | The configuration is wrong? Maybe we just need to fix stuff | 12:30 |
qinusty | but it's happening to me locally | 12:30 |
qinusty | using our .pylintrc | 12:30 |
juergbi | qinusty: do you see the issues also when invoked via ./setup.py test? | 12:32 |
laurence | Nexus, I think that supporting relative workspace paths is a good one to use the 'verify' column for - https://gitlab.com/BuildStream/buildstream/issues/191 | 12:32 |
laurence | what do you think? | 12:32 |
qinusty | juergbi, I didn't use to. Until yesterday | 12:33 |
qinusty | But tbh, `pylint --rcfile .pylintrc buildstream/` is much faster and I'd like them both to work the same :P | 12:33 |
Nexus | laurence: tagged :) | 12:33 |
*** finn_ has joined #buildstream | 12:46 | |
tiagogomes | juergbi, are symbolic links deliberately left out when pushing to the cas? | 12:53 |
laurence | Nexus, the issue should go into verify column before being closed :) | 12:53 |
juergbi | tiagogomes: no, symlinks should work fine | 12:53 |
tiagogomes | Pardon. I actually meant pulling :) | 12:53 |
juergbi | tiagogomes: in CAS symlinks are stored embedded in the directories. what issue are you seeing? | 12:54 |
juergbi | i.e., it's impossible to pull directories without pulling the embedded symlinks | 12:54 |
tiagogomes | juergbi ah I understand now how it works. Sorry for the ping :) | 12:55 |
juergbi | nw | 12:56 |
qinusty | Is the gitlab bot still dead? | 13:13 |
tlater | qinusty: I suspect that there is some configuration specific to how we invoke pylint in setup.cfg | 13:13 |
tlater | A little annoying, but iirc there was no way to merge the two | 13:14 |
tlater | Unless you want to manually type out the full command setup.py runs, of course. | 13:14 |
qinusty | tlater note: These errors show on both ./setup.py test ... and pylint --rcfile .pylintrc buildstream/ | 13:14 |
tlater | Are you *sure*? If so, why does CI pass for anyone? | 13:15 |
tlater | A few things could have happened; pylint could have updated and added a new warning, which our CI runner images don't have yet, or your command invokes checks on files that are not usually checked. | 13:16 |
tlater | I'm not sure that optionbool was checked, perhaps jennis knows more. | 13:17 |
qinusty | https://asciinema.org/a/BfHy4QYsrzDVTluIAvfZyX1LP | 13:19 |
tlater | Cool. In that case we should really figure out what changed between your machine and our CI | 13:20 |
qinusty | Indeed | 13:20 |
qinusty | Do we know CI versions? | 13:20 |
tlater | setup.py is set to install 3.1.0 | 13:23 |
tlater | Ah, >= | 13:23 |
qinusty | pylint 3.1? | 13:23 |
tlater | So presumably it installs newer versions | 13:24 |
qinusty | python* | 13:24 |
tlater | pylint | 13:24 |
qinusty | pylint is 2.1.1 from pip | 13:24 |
tlater | Oh, nevermind, I was looking at pytest... | 13:24 |
qinusty | :D | 13:24 |
tlater | Why do these all have to start with the same prefix? ;P | 13:25 |
qinusty | pyTHON | 13:25 |
qinusty | why can't it just be thon? | 13:25 |
qinusty | thon, test, lint | 13:25 |
persia | At least two of those are actual programs, not typically implemented in python. | 13:25 |
tlater | Really, I just think adding py to every python application is a bit redundant ;) | 13:25 |
tlater | Anyway... | 13:25 |
persia | This is why there aren't ctest, clint, etc. | 13:25 |
qinusty | It's what they do when they're not creative enough | 13:25 |
qinusty | clint who? | 13:26 |
tlater | You can check what the CI is running manually, qinusty | 13:26 |
persia | qinusty: lint(1) for C. | 13:26 |
* tlater is not at a PC right now | 13:26 | |
qinusty | pylint 0.11 | 13:26 |
juergbi | I removed the pylint version restriction in setup.py a few days ago because it was too old for Python 3.7 | 13:27 |
qinusty | um | 13:27 |
qinusty | regression? | 13:27 |
tlater | Ah, that would make sense | 13:27 |
juergbi | however, CI images are already on pylint 2, afaict (i.e., beyond the previous <2 restriction) | 13:27 |
juergbi | and it works fine in CI and here locally with 2.1.1 with setup.py test | 13:27 |
juergbi | I tested locally with both Python 3.6 and 3.7 | 13:28 |
qinusty | I guess I'm 3.5 | 13:28 |
qinusty | so | 13:28 |
tlater | Could you double check what pylint version actually runs? Last passing CI was using buildstream/testsuite-fedora:27-master-102-9067e269 | 13:28 |
tlater | docker run-ing and checking pylint --version should clear this up. | 13:28 |
qinusty | test session is using plugin pylint 0.11 | 13:30 |
tlater | (thanks Tristan for convincing us of a sensible docker image versioning scheme :D) | 13:30 |
juergbi | oh, it's still 1.9.2 | 13:30 |
juergbi | qinusty: that's pytest_pylint, not pylint itself | 13:30 |
qinusty | That is what I thought | 13:30 |
qinusty | Where are you seeing the 1.9.2? | 13:30 |
juergbi | https://gitlab.com/BuildStream/buildstream-docker-images/-/jobs/79147963 | 13:30 |
juergbi | I just noticed that there were multiple pipelines for that docker-image commit | 13:31 |
tlater | juergbi: That CI ran a month ago | 13:31 |
juergbi | the one I was looking at earlier installs pylint 2.0.0, but the one that was actually pushed uses 1.9.2 | 13:31 |
tlater | Ah | 13:31 |
juergbi | tlater: yes, but check: Successfully tagged buildstream/testsuite-fedora:27-master-102-9067e269 | 13:31 |
qinusty | either way, you say you're not seeing the issue on 3.6/3.7 with pylint 2.1.1 juergbi? | 13:32 |
juergbi | I hate this random pip versioning thingy | 13:32 |
tlater | So, someone has to update the .gitlab-ci.yaml to use a newer image :) | 13:32 |
persia | pip.lock exists as a thing. | 13:33 |
tlater | qinusty: The right way to check that is to update the docker images our CI runs. Debian 9 should still run 3.5, so then we'll know. | 13:33 |
juergbi | qinusty: yes. master completely fails here now, but it was working a couple days ago | 13:34 |
tlater | Want me to walk you through that? Probably handy to spread that knowledge... | 13:34 |
qinusty | Sure? | 13:34 |
*** sweetbae has joined #buildstream | 13:35 | |
juergbi | oh, the ruamel version restriction completely breaks things here :/ | 13:36 |
tlater | qinusty: You'll want to update the tags our .gitlab-ci.yaml picks here: https://gitlab.com/BuildStream/buildstream/blob/master/.gitlab-ci.yml#L81 | 13:36 |
noisecell | juergbi, https://gitlab.com/BuildStream/buildstream/issues/535 | 13:36 |
tlater | And here: https://gitlab.com/BuildStream/buildstream/blob/master/.gitlab-ci.yml#L1 | 13:36 |
*** sweetbae has quit IRC | 13:37 | |
qinusty | Where am I getting the image from to use? tlater | 13:37 |
noisecell | same issue we were seeing in definitions https://gitlab.com/baserock/definitions/-/jobs/85875464 | 13:37 |
tlater | qinusty: Second | 13:37 |
tlater | qinusty: You then check the CI runs of the buildstream-docker-images repo: https://gitlab.com/BuildStream/buildstream-docker-images/pipelines?scope=all&page=1 | 13:38 |
tlater | Pick the latest build of master | 13:38 |
tlater | Check its log (the one that's not aarch64) | 13:39 |
tlater | Ergh, someone removed the handy little print command | 13:40 |
tlater | In that case you'll have to go to the buildstream-docker-images docker hub | 13:40 |
tlater | I.e.: https://hub.docker.com/u/buildstream/ | 13:41 |
tlater | And just grab the latest tags for the relevant images | 13:41 |
tlater | Apply those, commit, push. The CI should then run with the new images :) | 13:42 |
tlater | We build new images nightly, so if something upstream changes and you want those changes in CI you'll have to go through that. | 13:43 |
*** xjuan has joined #buildstream | 13:44 | |
qinusty | juergbi did you get to the bottom of the ruamel restriction? | 13:49 |
juergbi | qinusty: changing it to < 0.15.52 works here | 13:49 |
juergbi | i.e., 0.15.51 doesn't break the test suite and also works with Python 3.7 | 13:49 |
juergbi | see #571 | 13:49 |
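[Editor's note: the pin discussed above would live in setup.py's install requirements; an illustrative fragment, with the version number taken from the discussion and the rest of the setup() call omitted.]

```python
# Illustrative fragment only: pin ruamel.yaml below the release that
# broke the test suite (0.15.51 reportedly works with Python 3.7).
install_requires = [
    "ruamel.yaml < 0.15.52",
]
```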
*** noisecell has quit IRC | 13:55 | |
qinusty | still fails linting on python 3.5 though it seems | 14:10 |
bochecha | hi, is there a way to tell buildstream that I really, really want it to rebuild an element, without getting from the cache? | 14:46 |
bochecha | I thought that was what --all was for, but it seems not | 14:47 |
persia | bochecha: From curiosity, why would you want to rebuild something that hadn't changed? | 14:49 |
bochecha | because I want to look at the build logs | 14:49 |
valentind | persia, when fixing a plugin for example. | 14:49 |
persia | valentind: Aha. That makes sense. | 14:50 |
bochecha | there's something wrong in the element, it prints an error in the logs apparently, but I don't have those logs on my machine to grep for the error message | 14:50 |
bochecha | I'd have to hunt the logs for each element from the CI, finding the job in which each was rebuilt | 14:50 |
valentind | bochecha, you can find the logs in .cache/buildstream/logs. | 14:50 |
bochecha | valentind: no | 14:50 |
bochecha | I never built those elements on my machine before | 14:50 |
persia | For logs, I thought there was `bst log`, but apparently I'm mistaken. | 14:50 |
valentind | Ah right. | 14:50 |
bochecha | they were downloaded from the shared cache | 14:50 |
bochecha | so I want to build it fully once on my machine | 14:50 |
persia | The logs should also be recoverable from the shared cache. | 14:51 |
qinusty | Can confirm juergbi, Changing from 3.5.3 to 3.7 made the issues disappear | 14:51 |
bochecha | persia: and how do I find the right log for the right build of each element? | 14:51 |
persia | That's why I thought there was `bst log`, so one didn't need to manually look up cache keys. | 14:52 |
persia | But, as stated before, I'm mistaken. | 14:52 |
persia | While I'm now convinced that there are reasons to be able to tell BuildStream to rebuild this thing now even when there is apparently no need (thanks valentind), I also believe there needs to be an easier way to get logs than to perform a build. | 14:53 |
persia | This is even more important in cases where the build might take a while. | 14:53 |
valentind | bochecha, I would just comment the artifact server from project.conf. Go manually to nuke the ref file in the cache. And then rebuild. | 14:53 |
persia | That process should work, but it ought be easier. | 14:54 |
bochecha | valentind: that's what I did, yes | 14:54 |
valentind | persia, I think there must be a plan for it with build artifacts. | 14:54 |
bochecha | but usually, build tools have something like a --no-cache option | 14:54 |
persia | valentind: Do you know if there is an issue number for it? | 14:54 |
valentind | I do not know. | 14:55 |
persia | bochecha: Yes: that's clearly a problem. It makes sense to report it. | 14:55 |
bochecha | also, if that's not what --all is for, then what does --all actually do? | 14:55 |
persia | bochecha: --all builds everything. | 14:55 |
bochecha | well, no, it doesn't :) | 14:55 |
persia | Let's say you have a project with elements A through G alphabetically. | 14:56 |
bochecha | it took everything from cache, here | 14:56 |
valentind | bochecha, it makes sure you have all the artifacts in the cache. | 14:56 |
persia | And you have two dependency chains, being A->B->C->D and E->F->G | 14:56 |
persia | Normally, if you build A, you'd end up building ABCD, but not EFG. | 14:56 |
persia | If you use --all, BuildStream should also build EFG, even though they aren't in the dependency chain for A. | 14:56 |
persia | For a meaning of "build" that includes "download from cache" | 14:57 |
bochecha | that's… not at all how I would have defined "build" :-/ | 14:57 |
persia | Fair enough :) Nonetheless, that's the semantic assigned, as far as I can tell from the docs. | 14:57 |
bochecha | ok… | 14:58 |
persia | To my mind, that you can tell the difference between a local build and a remote build is a bug. | 14:58 |
qinusty | Can someone look over https://gitlab.com/BuildStream/buildstream/merge_requests/627? It has been through one review already. | 14:59 |
bochecha | persia: well, when I have to build ARM through qemu/binfmt on my x86_64 laptop, I assure you that I can definitely tell the difference between a local build and a remote one :P | 15:01 |
persia | bochecha: After the build is completed? | 15:01 |
bochecha | it never completed >_< | 15:01 |
bochecha | laptop got insanely hot (so did the whole room in fact), it was taking ~4 hours for the typical configure script to run | 15:01 |
bochecha | I gave up | 15:02 |
bochecha | it takes 3 hours total for the whole freedesktop-sdk to build on the native ARM machine on the CI :) | 15:02 |
persia | Oh my. That is slow. | 15:02 |
bochecha | worst is when a configure script took 5 hours, and the subsequent "make" took 25 minutes | 15:02 |
persia | My point is that if we assert that bst caching everything is good, and assert that having public caches of common elements from CI is good, then we need to remove any difference in experience between consuming a local build and consuming a remote build. | 15:03 |
persia | I don't think we can feasibly change the experience of local builds in terms of performance, but if one can use a remote CI and still have full local access to logs, output results, and similar, one can effectively debug. | 15:03 |
bochecha | yes, I understood that, my example about ARM/qemu was a tongue-in-cheek one :) | 15:03 |
persia | Otherwise, the cache just gets in the way of development. | 15:03 |
bochecha | I do agree that removing the differences between local and remote is a worthy goal | 15:04 |
persia | I think it's an excellent example. It demonstrates the importance of being able to operate effectively on remote builds for a very real use case. | 15:04 |
persia | And a key part of that is access to logs. | 15:04 |
bochecha | especially, I'd like to be able to do things like « give me the logs for the build of foo.bst » | 15:05 |
persia | You might need to use your qemu-static chroot to operate on some of the results (e.g. execute arm64 ELF on x86-64), but that's less painful than not being able to do the thing. | 15:05 |
bochecha | and it would give me the ones it finds in my local cache, corresponding to the current version of the element | 15:05 |
persia | Yes. That avoids all the local builds you're planning. | 15:05 |
bochecha | otherwise, grepping through old logs is not fun | 15:06 |
persia | I've been told those are pushed to the shared cache. I don't know how to recover them. I suspect one has to look up the cache key for the current element, etc. | 15:06 |
bochecha | maybe "bst build" could retrieve the logs at the same time it retrieves the artifacts? | 15:07 |
persia | The argument against that is that it consumes more bandwidth than is necessary for builds. | 15:09 |
persia | If there isn't an issue, you don't care if you don't have the logs, and you do care about the time it takes to get the logs. | 15:09 |
persia | If there is an issue, you need the logs: I think BuildStream should fetch missing logs when they are requested, rather than speculatively. | 15:09 |
persia | This makes it a bit slower when requesting logs, but much faster for other cache requests, and fetching the logs is still usually faster than performing a local build to get logs. | 15:10 |
*** leopi has quit IRC | 15:12 | |
*** leopi has joined #buildstream | 15:37 | |
laurence | now that it's been merged, could someone verify relative workspace paths? https://gitlab.com/BuildStream/buildstream/issues/191 | 15:55 |
laurence | Reads like tristan captured the request on the ticket but it didn't originate from him | 15:55 |
laurence | So I wonder who that originator was...and maybe we can get them to verify | 15:55 |
*** tpollard has quit IRC | 16:00 | |
*** leopi has quit IRC | 16:01 | |
tiagogomes | how do I tame the test suite such that the coverage stats are not shown at the end? | 16:01 |
*** jonathanmaw has quit IRC | 16:46 | |
*** finn_ has quit IRC | 17:08 | |
cs_shadow | --addopts `--no-cov` ? | 17:53 |
cs_shadow | tiagogomes: sorry, should have been `--addopts '--no-cov'` | 17:53 |
*** finn has joined #buildstream | 18:00 | |
*** finn has joined #buildstream | 18:37 | |
*** xjuan has quit IRC | 19:56 | |
*** xjuan has joined #buildstream | 20:01 | |
*** xjuan has quit IRC | 20:22 | |
*** leopi has joined #buildstream | 20:36 | |
*** tristan has joined #buildstream | 21:10 | |
*** cs_shadow has quit IRC | 21:14 | |
*** tristan has quit IRC | 22:15 | |
*** leopi has quit IRC | 22:55 | |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!