IRC logs for #trustable for Thursday, 2017-01-05

*** AlisonChaiken has joined #trustable 05:18
*** ctbruce has joined #trustable 08:26
*** laurenceurhegyi has joined #trustable 08:41
<paulsherwood> laurenceurhegyi: was there a decision on opening the workgroup ml? 08:53
-*- paulsherwood notices logbot has disappeared now 08:53
-*- paulsherwood also notices that publically (in topic) is not actually a word 08:54
<laurenceurhegyi> Yes paulsherwood - no objections from anyone 08:54
<paulsherwood> laurenceurhegyi: great. do you have the powers to 'make it so'? 08:54
<laurenceurhegyi> I do... hang on... 08:54
<laurenceurhegyi> Ok, done. So the archive list is public, here:
<laurenceurhegyi> Membership still needs to be approved, but I don't think that's an issue, is it? 08:57
* laurenceurhegyi has changed topic for #trustable to: "Trustable software discussion. See for more context. Discussions on this channel will be logged publicly in" 08:57
<paulsherwood> laurenceurhegyi: membership will always need to be approved 08:59
<paulsherwood> and we should take more than usual steps to confirm that member applications are genuine 08:59
-*- paulsherwood has seen quite a few examples of spam to lists recently (not ours) 09:00
<paulsherwood> laurenceurhegyi: should feature the studygroup list 09:01
*** mdunford has joined #trustable 09:03
<laurenceurhegyi> Righto. I'll add that now. Re membership: currently there are two steps: folk need their email address confirmed, then they need to be approved by a moderator. 09:05
<laurenceurhegyi> I think they were the only options available on mailman, but I'm having a look now. 09:05
<paulsherwood> laurenceurhegyi: that's a fine process, assuming the moderator actually considers/checks each application 09:06
<paulsherwood> laurenceurhegyi: i think it's a flag in the config of the list itself 09:07
<laurenceurhegyi> paulsherwood: OK, great. That's been happening so far. 09:08
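The two-step subscription flow described above (email address confirmation first, then moderator approval) can be modelled as a small state machine. This is an illustrative sketch, not actual Mailman code; in Mailman 2 the equivalent behaviour is selected by the list's `subscribe_policy` setting ("confirm and approve"):

```python
# Toy model of the two-step list-subscription flow: an address must be
# confirmed, then approved by a moderator, before it becomes a member.
# Class and method names here are illustrative, not a real Mailman API.

class Subscription:
    def __init__(self, address):
        self.address = address
        self.confirmed = False
        self.approved = False

    def confirm(self):
        # Step 1: the applicant proves control of the email address.
        self.confirmed = True

    def approve(self, moderator_checked):
        # Step 2: a moderator actually considers/checks the application.
        if not self.confirmed:
            raise ValueError("cannot approve an unconfirmed address")
        self.approved = moderator_checked

    @property
    def is_member(self):
        return self.confirmed and self.approved
```

Enforcing the ordering (no approval before confirmation) matches the concern above that the moderator step must be a genuine check, not a rubber stamp.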
<laurenceurhegyi> paulsherwood: I think I may need to be a List Admin for trustable in order to add a list here:
<paulsherwood> laurenceurhegyi: i don't know how that works, sorry - pls check with dp 09:12
*** brlogger has joined #trustable 09:13
<laurenceurhegyi> paulsherwood: OK 09:14
*** toscalix has joined #trustable 09:34
*** ChrisPolin has joined #trustable 09:35
*** brlogger has quit (Read error: Connection reset by peer) 09:51
*** brlogger has joined #trustable 10:18
*** brlogger has quit (Read error: Connection reset by peer) 10:22
*** brlogger has joined #trustable 10:22
*** brlogger has quit (Read error: Connection reset by peer) 10:42
*** brlogger has joined #trustable 10:43
*** brlogger has quit (Read error: Connection reset by peer) 10:43
*** brlogger has joined #trustable 10:44
*** brlogger has quit (Remote host closed the connection) 10:44
*** brlogger has joined #trustable 10:44
*** Chris_Polin has joined #trustable 10:55
*** brlogger has quit (Read error: Connection reset by peer) 11:05
*** brlogger has joined #trustable 11:06
*** brlogger has quit (Read error: Connection reset by peer) 11:19
*** brlogger has joined #trustable 11:21
*** brlogger has quit (Remote host closed the connection) 11:26
*** brlogger has joined #trustable 11:26
*** brlogger has quit (Read error: Connection reset by peer) 11:36
*** brlogger has joined #trustable 11:38
<persia> Chris_Polin: Thanks for posting the diagram from our last discussion at
<paulsherwood> laurenceurhegyi: pls could you announce the public study group ml on trustable-software@, once is correct 11:41
*** brlogger has quit (Read error: Connection reset by peer) 11:41
<persia> There are a few things about the text that confuse me a bit. The first one is in the description of the submission, where the revision is being tracked separately from the commit message. In my mind, the patch contains the commit message. Were you thinking something different? 11:41
<ChrisPolin> I hadn't considered them as separate, but I can see how it reads that way. 11:44
<laurenceurhegyi> paulsherwood: will do both today, at some point, sure. 11:44
<persia> Ah, good. If it's just a gloss issue, then that's easy enough to fix. 11:44
<persia> The next part that confused me was the suggestion that reviewers would be sending artifacts to the Patch Tracker. What sort of artifacts? 11:45
<persia> For most of the patch trackers I know, reviewers only submit comments and votes (although some comments include links to supplementary data, for example, the log of a test run, or a live environment that has been deployed with the patch for ease of testing) 11:46
<ChrisPolin> Hmmm, I had included artifacts in there for completeness, but yes, it's probably not necessary. 11:49
<persia> Or maybe we need to indicate another component that stores test results, etc.? 11:49
<persia> I'm not expecting humans to upload many artifacts (although maybe logs from manual testing, etc.), but I expect robot reviewers to upload a number of things, especially to justify negative reviews. 11:50
<ChrisPolin> I think that falls into Repositories, although more detail about it should be in the text. 11:51
<ChrisPolin> Ah, you mean pre-merge. 11:51
<persia> I think of Repositories as just being git repos, perhaps augmented with review metadata (e.g. NoteDB). I hadn't imagined the tests being there. 11:52
<persia> Yes, at review time. Consider the case where a developer submits something that is rejected by a robot. The robot needs to somehow explain to the developer why the robot has given a negative vote. I don't imagine that most test robots will have sufficient natural language engines to do this interactively, so posting of test logs seems the normal way. 11:53
<ChrisPolin> Perhaps the 'Reviews' interface needs to be amended then. 11:53
<persia> But I'm not sure whether the "Trustable" requirements should be that there exists some mechanism for the robot to report things in the patch tracker (in some unspecified way), or whether "Trustable" should mandate a separate hosting component for the artifacts, with the patch tracker only containing links. 11:54
<ChrisPolin> The pre-merge test logs would in effect be the robot 'reviews', in the same vein as the human Reviewer's reviews. 11:54
<persia> Yes. I think we're discussing the "Reviews" interface, and whether it is comments/votes or comments/votes/artifacts. 11:54
<ChrisPolin> I think the latter, if test logs are included. 11:55
<persia> I imagine that a robot should be able to provide comments and/or votes in the same way as a human reviewer, with the logs being more about providing justification/explanation for the robot comments/votes to the developer. 11:56
<ChrisPolin> I agree. 11:56
<persia> OK. I'm fine with that. I think the text currently separates the robot and human reviewers too much, even though they use the same interface, and that the inclusion of "artifacts" in the human sentence is a bit confusing. 11:57
<ChrisPolin> Noted. I'll change it in the text, and also the 'Reviews' interface to something more explicit, like 'comments/votes/artifacts'. 11:58
<persia> I also prefer "comments/votes/artifacts" to "comments/suggestions/artifacts". Suggestions aren't mandatory, and comments may include suggestions, but I think votes are mandatory, and an important way of tracking who approved the patch for merging. 11:58
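The "Reviews" interface as agreed here (comments/votes/artifacts, shared by human and robot reviewers, with artifacts mostly used by robots to justify negative votes) could be sketched like this; field names and vote values are illustrative, loosely following common review-tool conventions:

```python
# Sketch of the 'Reviews' interface: both human and robot reviewers
# submit comments and a vote through the same interface; robots may
# additionally attach artifacts (e.g. links to test logs) to explain
# a negative vote. All names here are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Review:
    reviewer: str                 # human or robot identity
    vote: int                     # e.g. -1 reject, 0 neutral, +1 approve
    comments: List[str] = field(default_factory=list)
    artifacts: List[str] = field(default_factory=list)  # links to logs etc.

def robot_review(tests_passed: bool, log_url: str) -> Review:
    # A robot reviewer uses the same Review shape as a human, attaching
    # its test log as the justification for its vote.
    vote = 1 if tests_passed else -1
    comment = "tests passed" if tests_passed else "tests failed, see log"
    return Review("ci-robot", vote, [comment], [log_url])
```

Keeping artifacts as links (rather than inline data) leaves open the question raised above of whether they are hosted in the patch tracker itself or in a separate component.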
<persia> Moving on: this may only be an implementation detail, but I imagine the automerger consuming patch status whenever anything changes, but only taking action once the appropriate criteria have been met (in terms of number of positive votes from various roles, absence of negative votes, etc.) 11:59
<persia> The current text suggests that it only gets processed once the review is "passed". But maybe I'm reading too much into it. 12:00
<ChrisPolin> I can clarify that; your interpretation is a better way to put it. 12:01
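persia's reading of the automerger (consume patch status on every change, but act only once the criteria are met) might look roughly like this; the roles and thresholds are invented for illustration:

```python
# Sketch of the automerger trigger logic: re-evaluate on every status
# change, merge only when the required positive votes per role are
# present and no negative votes are outstanding. The default roles
# and thresholds below are illustrative.

def ready_to_merge(reviews, required=None):
    required = required or {"maintainer": 1, "robot": 1}
    if any(r["vote"] < 0 for r in reviews):
        return False  # any negative vote blocks the merge
    for role, needed in required.items():
        positives = sum(1 for r in reviews
                        if r["role"] == role and r["vote"] > 0)
        if positives < needed:
            return False
    return True

def on_status_change(reviews):
    # Called whenever anything changes about the patch; the action
    # only happens once the criteria above are satisfied.
    return "merge" if ready_to_merge(reviews) else "wait"
```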
<persia> Regarding the automerger, the text indicates that further testing will only be performed if necessary. I believe that once review criteria are met, it should be mandatory to perform another round of automated testing at merge time, just in case the target repository has changed since the last test (due to the time taken in reviews). This might be as simple as "does this patch still apply, or will it need to be rebased", or more complex, depending on the project needs. 12:06
<ChrisPolin> Mm, that could have been clearer. My reasoning for inserting 'if necessary' was that amendments to standards or requirements are also included under 'revisions'. 12:07
<persia> Yes, but those still need to be tested to make sure they apply to the current environment. 12:08
<ChrisPolin> But yes, I hadn't considered that. 12:08
<persia> The simple case is when a requirements change needs rebasing because it took too long to review. 12:08
<ChrisPolin> Noted, will fix that. 12:08
<persia> The less simple case is when a standards change would put the entire system out of compliance: that change shouldn't be merged until the system can be shown to be in compliance. 12:08
<persia> Most of the standards I've read have timeframes by which changes must be implemented, or formal revision numbers so that one can indicate at what revision one is compliant. 12:09
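The mandatory merge-time re-test persia argues for could be sketched as follows, with `applies_cleanly` and `run_tests` standing in for real VCS and CI operations:

```python
# Sketch of the merge-time check: even after review criteria are met,
# the automerger re-tests against the *current* target head, since the
# repository may have moved on during review. Both callables here are
# stand-ins for real VCS/CI operations.

def attempt_merge(patch, target_head, applies_cleanly, run_tests):
    # Simplest mandatory check: does the patch still apply?
    if not applies_cleanly(patch, target_head):
        return "needs-rebase"
    # Re-run automated tests against the current target, not against
    # the state of the tree when the patch was first reviewed.
    if not run_tests(patch, target_head):
        return "failed-retest"
    return "merged"
```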
<persia> Next: the phrase "passing test results are automatically merged into the Repositories" confused me, but I think it was related to our earlier discussion about artifacts. 12:12
<persia> In my mind, all the tests go into the repos. Perhaps we need the Automerger to use the Reviews interface? 12:13
<persia> Alternately, perhaps the "Automerger" and "Continuous Integration" components should be the same system? 12:13
<persia> Wait, I said that backwards :( 12:14
<persia> In my mind, none of the tests go into the repos. 12:14
<ChrisPolin> It is. This might be an implementation detail again, but I feel it's important to emphasise that the commit messages (with requirements/standards compliance details), developer identity, and test result information carry through to the documentation. To my mind, that's the traceability that trustable requires. 12:15
<persia> So, repos contain the code (and maybe review metadata, if something like NoteDB is used, or links to review metadata, if something like gerrit's ChangeID is used), and the Patch Tracker contains the test results (perhaps housed outside the main tracking system, and only linked, but that's an implementation detail). 12:15
<persia> I think the implementation becomes critical here. 12:16
<ChrisPolin> I agree, I think it does too. 12:16
<persia> If we consider arbitrary VCS, arbitrary CI, and arbitrary automerger, we can only mandate that the info exists. 12:16
<persia> If we assert e.g. git, then we can assert additional information about the info. 12:16
<persia> For instance, if we assert git+NoteDB, then we can be sure that we get all the review information (although not all the test results). 12:17
<ChrisPolin> The crucial implementation detail that makes this work to me is OpenControl. 12:18
<persia> Note that I still don't think we want the actual test results in the repo anywhere. If the review information informs us what revision of the tests was applied, and to what code those tests were applied, and an assertion of passing, that should be enough that we can verify the test environment's behaviour later and recreate logs. 12:18
<persia> Hrm? How does OpenControl make this work? 12:18
<ChrisPolin> OpenControl (and Compliance Masonry) automatically creates documentation which references a project-specific yaml file against known standards. 12:19
<persia> My understanding of OpenControl is that it consumes lots of information about the code, what bits of code affect what requirements/standards, etc., and then produces nice documentation with ComplianceMasonry. Have I missed something? 12:20
<ChrisPolin> So if the project-specific yaml is automatically populated with the information produced during the trustable workflow (namely developer details, commit messages, and to a lesser extent test results), then an auditor can trace from the documentation right back to the source code, with all of that information. 12:22
<jmacs> For the avoidance of doubt, OpenControl at the moment doesn't look at code at all; it only uses manually written assurances about the code. 12:22
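What ChrisPolin describes, auto-populating the project-specific YAML from workflow metadata, might look roughly like this. The field names only approximate Compliance Masonry's component schema, so treat them as assumptions to be checked against the OpenControl documentation:

```python
# Sketch: populating an OpenControl-style component description from
# metadata produced by the workflow (developer identity, commit,
# test result). Field names approximate the component.yaml schema
# and are illustrative, not authoritative.

def component_entry(standard, control, commit, author, tests_passed):
    text = "Implemented in commit %s by %s; automated tests %s." % (
        commit, author, "passed" if tests_passed else "failed")
    return {"standard_key": standard,
            "control_key": control,
            "narrative": [{"text": text}]}

def component_doc(name, entries):
    # In practice this dict would be serialised to YAML for
    # Compliance Masonry to consume.
    return {"name": name, "schema_version": "3.0.0", "satisfies": entries}
```

This is the bridge ChrisPolin wants: an auditor reading the generated documentation can trace each narrative back to a commit, an author, and a test outcome.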
<persia> Right, so I consider ComplianceMasonry to be at a higher level than the document we're currently writing. I would expect that some of the "tests" being run by CI would include generation of compliance information with ComplianceMasonry, perhaps followed by some automated review of that output to determine review votes. 12:24
<persia> I also would expect ComplianceMasonry to be run as part of the CD pipelines to ensure that any deployed system had current compliance information on file. 12:24
<persia> I don't see how this helps mandate what information happens to live in which component in the infrastructure supporting a trustable system. 12:25
<persia> (or maybe "lower-level", depending on how one thinks about things) 12:26
<ChrisPolin> haha, higher-level threw me. 12:26
<ChrisPolin> I agree, it's implementation rather than concept. 12:27
<persia> In my mind, "higher-level" was based on the idea that given some infrastructure, someone would then add ComplianceMasonry on top to generate specific SSPs. The other way of thinking is that given some goal (e.g. CI), one would implement it with ComplianceMasonry as a subcomponent. 12:27
<persia> But, we're discussing that implementation might become important when we're considering what information is in the repos. 12:28
<persia> In the case where we don't specify any implementation, I think the most we can assert is that the repos contain at least pointers to review information in the patch tracker. 12:28
<persia> If we want to specify specific implementations, then we can assert increasing amounts of information in the repos. 12:29
<persia> But I think we're mostly talking about VCS implementations, patch tracker backing stores, CI interfaces, etc. for that. 12:29
<persia> I think OpenControl is useful independent of the implementation details for the repositories. 12:29
<ChrisPolin> I think I keep coming back to implementation, simply because without it I can't see how this differs from the usual way that VCS, patch review, CI and CD are used. 12:32
<ChrisPolin> But I think I'm answering a different question. In a conceptual sense, what you're saying is correct. 12:34
<persia> I don't think we differ much from best practices for VCS, review, CI, and CD. I think our model differs from a number of rollouts that aren't quite following best practices, but only in ways that those projects consider unimportant. 12:36
<ChrisPolin> In which case, isn't the implementation critical? 12:37
<persia> That said, I think that in order to be "trustable", we care more about three things than most folk using this sort of tooling. Those things being A) auditability of patch provenance, patch approvals, etc.; B) machine-readable requirements/standards; and C) delivery of sufficient metadata during rollout to support compliance review 12:38
<persia> So, I see the goal to be to document a set of (machine-readable) requirements that a system must meet in order for that system to be considered "trustable". 12:39
<ChrisPolin> I think, without mentioning specific tooling, the workflow needs to be guided with implementation in mind. 12:40
<persia> The more that these requirements can allow alternate implementations, the easier it is to ensure that the requirements do not need to change over time, so that trustable systems now and trustable systems in a decade may use nearly the same requirements. 12:40
<persia> That said, if we need specific information to be included in specific places, that means that some implementations cannot be compliant. 12:40
<persia> As a result, it makes sense to consider the specific details we need in light of current (or possible future) implementations, to determine if we're being unreasonable. 12:41
<persia> As an example of the sort of question that limits the potential implementation: review metadata may be referenced in three ways. Do we care which one is used? 12:42
<persia> The three ways are A) stored in the repo with the code, B) stored in the patch tracker, with links to the patch tracker in each revision of the code in the repo, C) stored in the patch tracker, with links to the code revisions in the patch tracker 12:43
<persia> For all of these mechanisms, one can automate collection of information in order to generate e.g. ComplianceMasonry-compatible YAML. 12:43
<persia> *but* maybe we consider one or another of these implementations to be untrustable. 12:44
<persia> So, in my experience, (C) is untrustable. I say this because I've worked in projects with (C) before, and have edited the pointer to code for a given issue when either A) I entered the wrong number to start, or B) the code didn't actually fix the issue, and I needed to add another patch. 12:45
<persia> As a result, for the project in which I used a (C)-like workflow, I have certain knowledge that the pointers to code in the patch tracker are not reliable indicators of the intent of the patches. 12:45
<persia> I'm less certain about (A) vs. (B). 12:45
<ChrisPolin> I follow. 12:46
<jmacs> I lean towards putting everything possible in the repo at the moment 12:46
<persia> jmacs: That's my preference also. Do you think it should be mandated? 12:46
<persia> Actually, on reflection, I'm not sure I want to store all the logs of every test run in the repo: that could make clone take a long time. 12:47
<persia> (as long as enough information is stored in the repo to ensure that test logs can be regenerated) 12:47
<persia> Similarly, I'm not sure it makes sense to store build artifacts in the repo: better to mandate reproducible builds. 12:48
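persia's alternative to storing build artifacts in the repo, mandating reproducible builds, relies on being able to rebuild later and compare digests. A minimal sketch, with `build` standing in for a real (deterministic) build step:

```python
# Sketch: with reproducible builds, an artifact need not be stored,
# because rebuilding the same source must yield the same digest. The
# 'build' callable is a stand-in for a real deterministic build.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_is_reproducible(build, source, recorded_digest):
    # If rebuilding from the same source matches the recorded digest,
    # the artifact itself never needed to live in the repo.
    return digest(build(source)) == recorded_digest
```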
<jmacs> No, I don't think we can mandate it now. I'd like it to be, but people are stuck with existing VCS systems and patch managers, rather than ideal ones. 12:48
<persia> Do you feel comfortable mandating at least (B), where changes in the VCS contain metadata referencing an entry in the patch tracker (which contains the rest of the interesting metadata)? 12:50
<jmacs> Yes, I think that's reasonable 12:50
<persia> Chris_Polin: ? 12:50
<ChrisPolin> Yes, that makes it easy to regenerate without having huge logs in the repo. 12:51
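Option (B), as agreed: each VCS change carries metadata referencing its entry in the patch tracker. One common shape for that metadata is a commit-message trailer (Gerrit's `Change-Id:` works similarly); the trailer name below is hypothetical:

```python
# Sketch of option (B): each commit message carries a trailer pointing
# at its patch-tracker entry, where the rest of the review metadata
# (votes, comments, test results) lives. 'Patch-Tracker:' is a
# hypothetical trailer name, used here only for illustration.

def tracker_ref(commit_message: str, trailer: str = "Patch-Tracker:"):
    # Scan the message for the trailer line and return its value.
    for line in commit_message.splitlines():
        if line.startswith(trailer):
            return line[len(trailer):].strip()
    return None
```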
<persia> Right, so, mostly because of the limitations of most common implementations, we aren't storing test results in the repos (or even review comments, although we can recommend that (as it is an option in some implementations)) 12:52
<persia> Which means that the comment in the diagram "In addition, any other artifacts submitted to the system for consumption during the workflow will also reside here." no longer applies (and should be removed). 12:53
<ChrisPolin> No problem. I'll add a note about the metadata as well. 12:54
<persia> The only other thing that occurred to me when reading the text was that it appears there is an assumption that there is one production instance. 12:56
<persia> Whereas, for many projects, there are multiple deployments of the same codebase, possibly with staggered update times, each likely having a parallel instance of the compliance documentation. 12:57
<persia> While I think it is correct that the compliance documentation generation is done by the CD system in parallel with deployment, it may be useful to indicate that this is part of a deployment pipeline to a given target, and that there might be multiple such pipelines. 12:58
<ChrisPolin> Sure, that's something I hadn't considered. 12:59
<persia> A couple of examples of how that might apply: if one is developing a ballast control system for an offshore platform, one might deploy that individually to multiple platforms, based on platform-specific criteria. If one is developing a base node OS for a supercomputing cluster, one is more likely to upload a single image to some interface, and use orchestration automation to cause the nodes to each load the new master image. 13:01
<persia> So, for the offshore platform, one has one SSP per instance. For the supercomputer, one has one SSP per image (which is running on hundreds or thousands of instances). 13:02
<ChrisPolin> I see the distinction, ok. 13:03
<persia> There is also the middle case, e.g. different models of car by the same manufacturer. 13:05
<laurenceurhegyi> I think that ChrisPolin has raised a broad concern here, and I think I understand that same feeling. ChrisPolin: Are you asking: what is it about this workflow that is novel? After all, all of the technologies out there (all the different VCS, CI, CD, etc.) are trying to do good things, right? Are we simply bringing all of their principles together into one overall picture, but not really saying anything new? 13:07
<laurenceurhegyi> If we are doing that and not saying anything new, is that okay? 13:08
<laurenceurhegyi> I guess persia touched on this when saying: 13:08
<laurenceurhegyi> "I think that in order to be 'trustable', we care more about three things than most folk using this sort of tooling. Those things being A) auditability of patch provenance, patch approvals, etc.; B) machine-readable requirements/standards; and C) delivery of sufficient metadata during rollout to support compliance review." 13:08
<persia> I think the new thing we are saying is "if this set of best practices is followed, the result is trustable". 13:08
<persia> The key is to make sure that we define things in enough detail that we can make that assertion. 13:09
<ChrisPolin> Which is fair enough, and I'm happy with that so long as that is the established scope. 13:09
<persia> For example, RCS is an insufficient VCS for our purposes, because we're mandating that patches are reviewed prior to merge. 13:09
<persia> Mind you, a sufficiently advanced patch tracker might handle that, and the automerger might store the final results in RCS, but that's a corner case. The same for a developer who uses RCS locally, and generates patches in a distributed-workflow-compliant manner. 13:10
<persia> laurenceurhegyi: Are you happy with that scope? 13:10
<ChrisPolin> Yes. I think my initial implementation concerns are assuaged slightly in that the conceptual design of the workflow inherently includes and excludes implementation methods. 13:11
<paulsherwood> where trustable means 'we think it's worth others' time to decide whether they should trust it or not, and there is evidence for them to consider when reaching that decision' 13:11
<persia> Chris_Polin: My worry about specifying implementation is mostly that it might block some future advance that we would otherwise prefer. 13:11
<laurenceurhegyi> persia: the scope makes sense to me. But I see that as a vital detail, and I'd like to ensure we are all in agreement. And the community too. We should also mention it specifically in the Trustable Software Workflow, in my opinion. 13:11
<persia> paulsherwood: Good point. Perhaps we are instead saying "if this set of best practices is followed, enough information is available in the system that it is possible to determine if the result can be trusted." 13:12
<persia> laurenceurhegyi: I agree that we should define the assertion we are making in the text. 13:13
<laurenceurhegyi> Yes, that summarises it very well, and should be made clear. 13:20
<jmacs> I'm researching gerrit and notedb at the moment, but right now I'm strongly in favour of keeping review data in-repo 13:46
<jmacs> I doubt we'll be able to mandate that, though, because it means prying people away from github/gitlab/gerrit/mailing lists 13:48
<persia> For now, I agree that it cannot be mandated. I think it should be able to be mandated in the future. Discussions have gone quiet, but there was good discussion on defining a default format in git for that data last year. Perhaps that discussion should be restarted. 13:49
<persia> My memory was that NoteDB was the basis of the default, but NoteDB wasn't itself finalised at the time. 13:50
<jmacs> I may well end up doing that 13:50
<laurenceurhegyi> pedroalvarez: the log seems to have stopped again. Also, bits of the conversation from yesterday and today are missing. 14:19
<pedroalvarez> yes, I'm aware of that :( 14:19
<jmacs> MISRA C support in GCC was last suggested in 2005, when the response from GCC was roughly "lol, no" 14:20
*** brlogger has joined #trustable 14:20
*** brlogger has quit (Read error: Connection reset by peer) 14:24
*** brlogger has joined #trustable 14:26
<persia> jmacs: Was the suggestion "You guys should implement it" or "I'm working on a gcc fork that implements this: would you guys be willing to review some patches?" 14:26
*** brlogger has quit (Remote host closed the connection) 14:27
*** brlogger has joined #trustable 14:27
<jmacs> It was "are there already any efforts to do this, and is it a good idea?"
*** brlogger has quit (Read error: Connection reset by peer) 14:33
<persia> I read Daniel's response as "I know of nothing. I (personally) don't think it's a good idea. If you want to do it, go ahead." 14:35
<jmacs> There are other responses 14:36
<persia> Gabriel's response is harder to interpret, as it contains not enough words. I'm not sure whether it is a statement that it is a terrible idea, or a statement of non-interest for personal reasons. 14:36
*** ChrisPolin has quit (Quit: Leaving) 14:36
*** chrispolin has joined #trustable 14:38
<persia> Neil seems neutral, but references Derek's critique of the flaws in MISRA. Do those flaws still apply? 14:42
<jmacs> I haven't read MISRA yet, so I can't really say 14:43
<persia> was an interesting read for me (Derek's critique) 14:44
<persia> Thinking more about the workflow, I wonder if it makes sense to try to specify what information is entered in each interface, etc. 15:05
<persia> Alternately, perhaps it makes sense to create activity diagrams to show how things work, and then try to define interfaces. 15:05
-*- persia hasn't used UML in long enough, and doesn't quite remember the best practices for UML-based design 15:06
<chrispolin> I'll defer to your more experienced judgement when it comes to best practices here. 15:07
<chrispolin> I'm concerned about overloading the diagram, and making it too convoluted to read. 15:07
<persia> Technically, the current document cannot reliably be processed as machine-readable requirements, as it mixes elements of use case diagrams and component diagrams. 15:09
<persia> I did that to make clear which interfaces are expected to be used by humans, but the formalism isn't correct. 15:09
<persia> In most of my previous requirements development efforts, I started from either "Use Cases" or "User Stories". Do we have some of those? Do we want to create them formally? 15:10
<paulsherwood> want to have a go? 15:11
<chrispolin> We don't have them, currently. This was something that we were discussing earlier; it would be useful to have some user stories/use cases in order to give this a bit of context. 15:12
<persia> Then let's do that. My current preference is for user stories, rather than use cases, because I think user stories' use of Personas helps build context. 15:13
<persia> So, the first step is to consider who wants to interact with the system. 15:14
<persia> What roles do we have? I know Developer, Reviewer, Compliance Officer. Are there others? 15:15
<persia> Do we need someone who approves merges vs. someone who cannot? 15:16
<chrispolin> I would suggest that this is someone who wishes to create software for a safety/security critical application which complies with one or more standard(s). 15:16
<persia> Do we need someone who isn't allowed to write code? 15:16
<chrispolin> Does Compliance Officer == Auditor? 15:16
<persia> chrispolin: Not usually. Auditors are usually independent. 15:16
<chrispolin> I think we need an Auditor too then. 15:17
<persia> We can always add more Personas later, but a first pass towards identifying a set of folk that are part of an organisation that wishes to create software for a safety/security critical application which complies with one or more standards is the current goal. 15:17
<chrispolin> 'Someone who isn't allowed to write code' - this is to test that they are unable to, presumably? 15:17
<chrispolin> Or rather, that the system doesn't permit them to. 15:18
<persia> chrispolin: From a User Stories perspective, it is to document that a standard might require that only a certain group of people are permitted to be code authors (e.g. the Auditor may not be allowed to be someone who has written any code). 15:18
<chrispolin> Yes, ok. 15:19
<persia> From that, we could create a Scenario to test that the person with that role has their code automatically rejected. 15:19
<persia> Another possible case is where code for different components must come from different departments, with no cross-department code submissions permitted. 15:19
<chrispolin> Yep, that was my interpretation. 15:19
<persia> I think that's a dangerous business practice, but if we want the system to be able to represent it, we need to have a cast that includes enough folk to describe that situation. 15:20
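The two policy cases just discussed (a role forbidden from authoring code, and no cross-department submissions) could be expressed as a simple submission check that the system applies automatically; the roles and departments here are illustrative:

```python
# Sketch of the two policy checks discussed: a standard might forbid
# certain roles from authoring code (e.g. the Auditor), or forbid
# cross-department code submissions. Role and department names are
# illustrative.

def submission_allowed(author_role, author_dept, component_dept,
                       forbidden_roles=("auditor",)):
    if author_role in forbidden_roles:
        return False  # e.g. the Auditor may not have written any code
    if author_dept != component_dept:
        return False  # no cross-department code submissions permitted
    return True
```

A Scenario built from the user stories would then assert that a patch failing this check is automatically rejected.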
<jmacs> I thought that was common practice for redundant systems 15:21
<persia> Here's one initial set: Alice, a developer in the HMI group; Bob, a developer on the hardware team; Catherine, the compliance officer; David, an auditor; Ernestine, a project manager; Franklin, an operator 15:23
<persia> Do those sound like sensible people? Do we need more? Do we need to change their jobs? 15:23
<persia> jmacs: Yes, I believe it is common practice for redundant systems. I still think it is dangerous to prohibit suggestions, as I believe that more eyes make for better code. That my philosophy doesn't match sovereign purchasing policies means we should probably cover that use case. 15:25
<persia> So, Gretel, a developer working on an alternate implementation of the redundant system 15:25
<persia> Anyone else? 15:27
<chrispolin> Not that I can think of at the minute. 15:27
<persia> Do we expect the developers will encode requirements/standards, or do we expect this is a separate team? 15:27
<jmacs> The customer is a person 15:28
<chrispolin> I had read that as Operator. 15:28
<persia> Right. Hank, a senior stakeholder and the executive champion for the project 15:28
<jmacs> Ah. I wasn't sure what operator meant, tbh. 15:29
<persia> chrispolin: I think of Franklin as the guy who has to run the code, owns the production deployment, etc. 15:29
<chrispolin> Ah, like operations. Ok. 15:29
<persia> Right. I think we want to tell the story of David auditing Franklin, and how Franklin uses the system to satisfy David's needs. 15:29
<persia> I expect David will also audit Ernestine, to verify Alice and Bob are not breaking the rules. 15:30
<persia> Given that my description of Franklin's role wasn't broadly understood: is there anyone else where anyone has any uncertainty of what they do? 15:31
<chrispolin> Just for clarity; the 'HMI group'? 15:32
<persia> human-machine-interface. "front-end" developer? 15:32
<persia> My idea was that Alice and Bob worked in slightly different departments. 15:32
<persia> But now I realise that makes it hard for Alice to review Bob's work, and vice versa. 15:32
<persia> So I wonder if they should work in the same group (both as just "developer"), or if we want to add other coworkers, because we expect to say something useful about people working in different groups. 15:33
<persia> (e.g. maybe someone can +2 things their team does, but only +1 things for another team) 15:33
<chrispolin> There needs to be another developer to fulfil the 'Reviewer' role. Is it realistic that they could review each other's code? 15:37
<persia> I think that we're assuming that the human code review is being done by a qualified human. Otherwise, I'm not sure of the value of the review. 15:38
<chrispolin> In which case, it's probably fine as it is, unless you can see another issue stopping them reviewing each other? 15:39
<persia> As a result, I think we need to have two developers who work on the same area (and so are qualified to review each other's code). I'm not sure if we also need another developer, who we don't consider qualified to give full review. 15:39
<chrispolin> Ok, so both 'developers' then? 15:40
<persia> As long as we don't believe the +1/+2 convention is important to the system, that works. 15:40
<persia> If we want to express the idea of +1 vs. +2 votes, then we need a third person. 15:40
*** AlisonChaiken ( has quit (Quit: Leaving)15:41
jmacsIn the environments I've worked in, people review each other's code and there's no need for a specific reviewer (just a review role per change)15:43
persiajmacs: My experience matches that.  Generally a "reviewer" is a peer developer working on the same sort of thing.15:44
persiaThat said, in a number of open source projects in which I've worked, there is a distinction made between a "+1" vote, indicating someone likes it, and a "+2" vote, indicating it should be merged.  This distinction is often used to restrict who can authorise the merge.15:44
jmacsI have seen the role of 'gatekeeper' for specific sections of a codebase, whose job it is to either perform an additional review or add other reviewers.15:45
persiaI don't know if that practice is used much outside large open-source projects.15:45
persiaYes.  I consider "gatekeeper" and "maintainer" roughly equivalent to "has +2 authority".15:45
persiaBut I'm not sure if every developer has such authority.15:46
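[Editor's note: a minimal, hypothetical sketch of the "+1 vs. +2" convention discussed above, in which any developer may vote but only a maintainer's +2 authorises a merge. The function and names here are illustrative assumptions, not tooling from the project.]

```python
# Hypothetical sketch (not project tooling): a change merges only when a
# maintainer has cast a +2 vote; votes from non-maintainers never suffice.

def can_merge(votes, maintainers):
    """Return True if any maintainer has voted +2 on the change."""
    return any(score == 2 and voter in maintainers
               for voter, score in votes.items())

maintainers = {"Alice", "Bob"}       # senior developers with +2 authority

votes = {"Irina": 2}                 # junior developer tries to approve
print(can_merge(votes, maintainers)) # Irina's vote alone does not merge

votes["Bob"] = 2                     # a maintainer approves
print(can_merge(votes, maintainers)) # Bob's +2 authorises the merge
```

This mirrors the second synopsis below: Irina's approval does not merge the patch, Bob's does.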
chrispolinIn the context in which Trustable might be important, is it likely that the +1/+2 convention is used?15:47
persiaSo, let's add Irina, a developer in Bob's group who is not a maintainer.15:47
persiachrispolin: Maybe not, but I think "gatekeeper" or "reviewer" might be.15:48
chrispolinNo harm in having them there in which case.15:48
persiaHmm, thinking about it, I think it will be easier to write stories with them all in the same group.15:48
persiaSo, Alice, a senior developer; Bob, a senior developer; Irina, a junior developer.15:49
persiaI think that's probably enough people to start.  We can add more, if we need.15:50
persiaNow, to name some stories.15:51
chrispolinThat covers the same roles without the ambiguity; it's better.15:51
-*- persia is having trouble thinking in titles15:51
persiaAnyway, some synopses:15:52
persiaAlice writes a patch, it passes tests, Bob approves it, it gets merged, it gets deployed, Franklin gets an updated compliance doc.15:52
persiaAlice writes a patch, it passes tests, Irina tries to approve it, it doesn't get merged, Bob approves it, it gets merged.15:53
persiaAlice writes a patch, Bob approves it, it doesn't pass tests, it doesn't get merged, Alice submits an update, It passes tests, Bob approves it, it gets merged.15:54
persiaIrina changes requirements, it doesn't pass tests (not yet implemented), Bob writes an implementation, they pass tests together, Alice approves both, both merge.15:55
-*- persia hopes others chime in with other interesting stories that expose a requirement15:55
persiaDavid audits Franklin, Franklin checks his deployment info, Franklin gets his latest SSP, David is satisfied.15:56
persiaDavid audits Ernestine, Ernestine generates a report showing the activity of Alice, Bob, and Irina, demonstrating compliance with the SDLC.  David is satisfied.15:56
persiaCatherine is notified of an update to a standard governing the project.  Catherine submits the change in the standard.  Alice approves it.  Irina reviews it, and implements a requirements patch.  Bob approves the requirements patch.  Alice writes an implementation.  Bob approves the implementation patch.  All three changes end up in the repositories.15:58
paulsherwoodcan these be in yaml, please? :)15:59
persiapaulsherwood: Absolutely.  Once we get an outline, I expect we'll want to put them in the Mustard-compatible YAML that loom consumes.15:59
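[Editor's note: a hypothetical sketch of how one synopsis above might be expressed in YAML. The actual Mustard-compatible schema that loom consumes is not specified in this log, so every field name here is an assumption.]

```yaml
# Hypothetical sketch only: field names are illustrative assumptions,
# not the real Mustard/loom schema.
story: patch-merged-and-documented
actors: [Alice, Bob, Franklin]
steps:
  - Alice writes a patch
  - the patch passes tests
  - Bob approves the patch
  - the patch is merged
  - the patch is deployed
  - Franklin receives an updated compliance document
```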
paulsherwoodoh... is loom a thing again now? :)16:00
chrispolinApologies, was with thinkl33t trying to figure out this ZNC issue.16:03
persiaMaybe someone else has another preferred YAML syntax for these?16:03
-*- persia has only encountered loom as a tool to read YAML-formatted user stories16:04
chrispolinThe mustard-compatible syntax can be converted to be compliancemasonry-compatible, so it covers that use as well.16:05
jmacsWe've only tried doing that for Mustard's requirements, not the other bits16:06
persiaThen we may as well start with that, and convert it to something else if we decide we need something different.16:06
-*- paulsherwood would be happy to go with loom, if it's open now16:06
persiaI have a GPL-3+ copy of the source, if there isn't another open version, I can share it.16:09
*** chrispolin ( has quit (Quit: Leaving)16:12
persiaAnyway, we've a fair bit of writing to do before we get too concerned with test tooling.16:12
persiaThinking about the standards & requirements changes stories, I wonder if perhaps Ernestine should be a reviewer/approver for those.  She isn't a developer, but as the PM, she presumably has concerns about changes in scope.16:14
*** chrispolin ( has joined #trustable16:15
*** Chris_Polin (~ChrisPoli@ has quit (Quit: ZNC -
persiaAnyone else have any other synopses?  I'm blanking a bit (although perhaps actually writing the stories will make me think of more).16:22
*** chrispolin- (~chrispoli@ has joined #trustable16:24
*** chrispolin ( has quit (Quit: Leaving)16:26
-chrispolin- is now known as chrispolin16:26
chrispolinHi persia, apologies, back on again now.16:30
persiaNo worries.  I just ran out of stories off the top of my head, but worry that I might be missing some that are important.16:30
chrispolinI'll have a think about them and see what more I can add.16:31
laurenceurhegyipaulsherwood: now contains the C Safety and Security Study Group list. I shall announce to the trustable mailing list now.16:37
paulsherwoodw00t! :)16:41
*** brlogger (~supybot@ has joined #trustable16:53
paulsherwoodlaurenceurhegyi: you really need to either snip, or top-post occasionally :)16:54
laurenceurhegyiI knew I'd make some sort of faux pas :( 16:56
laurenceurhegyiI wanted to retain some of the previous info, and I thought top-posting was disliked?  16:56
rjekSnip everything you're not directly replying to, reply inline16:57
-*- laurenceurhegyi takes note16:59
jmacsAlternatively, reply in the style of the message you're replying to17:00
laurenceurhegyicould you define 'style'? 17:01
jmacstop-posting or bottom-posting are styles17:02
*** ctbruce ( has quit (Quit: Leaving)17:13
*** laurenceurhegyi ( has quit (Ping timeout: 245 seconds)17:22
*** ctbruce (~bruceunde@2a02:c7f:4430:d400:b154:165c:755a:f4cb) has joined #trustable18:23
*** bruceunderhill (~bruceunde@2a02:c7f:4430:d400:b154:165c:755a:f4cb) has joined #trustable18:23
*** ctbruce (~bruceunde@2a02:c7f:4430:d400:b154:165c:755a:f4cb) has quit (Client Quit)18:26
*** toscalix (~toscalix@ has quit (Quit: Konversation terminated!)18:41
*** bruceunderhill (~bruceunde@2a02:c7f:4430:d400:b154:165c:755a:f4cb) has quit (Quit: Leaving)23:45

Generated by 2.14.0 by Marius Gedminas - find it at!