Query did not always return the last run as required, due to the
implementation-defined behaviour of mixing aggregate and
non-aggregate columns with GROUP BY
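For illustration, a sketch of the failure mode using a hypothetical
builds(name, number, result) table: in a grouped query a non-aggregated
column may be taken from any row of its group, so joining back against
the per-group maximum is the deterministic form.

    #include <cstdio>

    // Hypothetical schema: builds(name, number, result).
    // Ambiguous: "result" is a non-aggregate column in a grouped query,
    // so in general it may come from any row of the group, not from the
    // row holding MAX(number).
    static const char* kAmbiguous =
        "SELECT name, MAX(number), result FROM builds GROUP BY name";

    // Deterministic: join back against the per-group maximum so that
    // every selected column belongs to the latest run.
    static const char* kDeterministic =
        "SELECT b.name, b.number, b.result FROM builds b JOIN "
        "(SELECT name, MAX(number) AS number FROM builds GROUP BY name) m "
        "ON b.name = m.name AND b.number = m.number";

    int main() {
        std::puts(kAmbiguous);
        std::puts(kDeterministic);
        return 0;
    }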
Drop the boost dependency, since recent versions of capnproto's kj
also provide a filesystem library. Take the opportunity to
refactor the Run object to become more than a POD and to encapsulate
some of the functionality previously handled in the Laminar class.
Part of #49 refactor
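As a rough sketch of the replacement, assuming the kj/filesystem.h API
of recent capnproto releases; the file name is illustrative and error
handling is omitted:

    #include <kj/filesystem.h>
    #include <kj/debug.h>

    int main() {
        // Handle to the real disk filesystem; replaces boost::filesystem.
        kj::Own<kj::Filesystem> fs = kj::newDiskFilesystem();
        const kj::Directory& cwd = fs->getCurrent();

        // Create or open a file relative to the working directory.
        kj::Own<const kj::File> file = cwd.openFile(
            kj::Path("example.txt"),
            kj::WriteMode::CREATE | kj::WriteMode::MODIFY);
        file->writeAll(kj::StringPtr("hello from kj\n").asBytes());

        // Read the contents back.
        KJ_LOG(INFO, file->readAllText());
        return 0;
    }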
- regressions and recoveries: list of jobs whose run status changed,
  ordered first by currently failing jobs, secondly by the number of
  runs since the status change, descending for currently failing jobs
  and ascending for currently passing jobs
- low pass rates: list of the jobs with the worst pass rates,
  calculated over all time
- run time changes: jobs with the largest changes in build time. This
  is calculated as the difference between the range and the standard
  deviation over the past 10 runs (a numeric sketch follows this list)
- average run time distribution: shows the number of jobs in the
  system divided into buckets based on their average runtime
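A small numeric sketch of the run-time-change metric, with made-up
durations in seconds; this is not Laminar's actual implementation:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // (range - standard deviation) over a job's last 10 run durations.
    // A large positive value suggests a real shift in build time rather
    // than ordinary run-to-run jitter.
    double runtimeChange(const std::vector<double>& d) {
        auto mm = std::minmax_element(d.begin(), d.end());
        double mean = 0;
        for (double x : d) mean += x;
        mean /= d.size();
        double var = 0;
        for (double x : d) var += (x - mean) * (x - mean);
        var /= d.size();
        return (*mm.second - *mm.first) - std::sqrt(var);
    }

    int main() {
        // Steady job: tiny range, score near zero.
        std::printf("%.1f\n", runtimeChange({60, 62, 61, 59, 60, 61, 60, 62, 61, 60}));
        // Job whose build time doubled mid-window: large score.
        std::printf("%.1f\n", runtimeChange({60, 61, 60, 62, 61, 60, 118, 119, 120, 121}));
        return 0;
    }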
The computed list of filtered jobs wasn't updated when a notification
was received from the server, because a computed property is only
re-evaluated when its reactive dependencies change. Switch to using a
method rather than a computed property to fix this. Also add tags to
jobs reported in job_started and job_completed notifications
This feature allows runs to be sorted by result, number, start time
or duration, in ascending or descending order, on the Job page. The
request is processed server-side so that page division can be done
correctly. Currently running jobs are not sorted.
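A hedged sketch of the server-side handling, using a hypothetical
builds table: the sort column must be whitelisted because identifiers
cannot be bound as SQL parameters, and paging goes into the same query.

    #include <cstdio>
    #include <cstring>
    #include <string>

    // Build the ORDER BY ... LIMIT/OFFSET query for one page of runs.
    std::string buildRunQuery(const char* sort, bool desc, int page, int perPage) {
        // Whitelist the sort key; never interpolate client input directly.
        const char* col = "number";
        for (const char* allowed : {"result", "number", "started", "duration"})
            if (std::strcmp(sort, allowed) == 0) col = allowed;
        std::string q =
            "SELECT number, result, started, completed - started AS duration "
            "FROM builds WHERE name = ? ORDER BY ";
        q += col;
        q += desc ? " DESC" : " ASC";
        q += " LIMIT " + std::to_string(perPage)
           + " OFFSET " + std::to_string(page * perPage);
        return q;
    }

    int main() {
        std::printf("%s\n", buildRunQuery("duration", true, 2, 20).c_str());
        return 0;
    }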
This is a refactor that uses the kj framework more cleanly for handling
processes spawned by Runs. It obviates the workaround introduced back in
ff42dae7cc and incidentally now requires C++14.
Part of #49 refactor
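One way to see why C++14 becomes necessary when leaning on kj promises
(a generic sketch, not Laminar's code): moving an owned object into a
then() continuation needs lambda init-captures, which C++11 lacks.

    #include <kj/async.h>
    #include <kj/string.h>
    #include <kj/debug.h>

    int main() {
        kj::EventLoop loop;
        kj::WaitScope ws(loop);

        // Some owned state, standing in for a spawned process's resources.
        kj::String state = kj::heapString("process output");

        // C++14 init-capture: move ownership into the continuation so the
        // state lives exactly as long as the promise chain needs it.
        // C++11 lambdas can only copy or reference-capture, neither of
        // which works for move-only kj types.
        kj::Promise<void> p = kj::evalLater([s = kj::mv(state)]() {
            KJ_LOG(INFO, s);
        });

        p.wait(ws);
        return 0;
    }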
The old implementation slurped the whole artefact into memory and
did not ensure it remained allocated beyond the first call to write().
The new implementation uses mmap and ensures the mapping lasts until
the file has been delivered.
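A minimal sketch of the mmap approach, assuming POSIX; the class name
is illustrative and error handling is reduced to the essentials:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstddef>

    class MappedArtefact {
    public:
        explicit MappedArtefact(const char* path) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) return;
            struct stat st;
            if (fstat(fd, &st) == 0 && st.st_size > 0) {
                void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                if (p != MAP_FAILED) { addr_ = p; len_ = st.st_size; }
            }
            close(fd); // the mapping survives closing the descriptor
        }
        // Unmap only on destruction, i.e. after delivery has finished;
        // no copy of the file contents is ever made.
        ~MappedArtefact() { if (addr_) munmap(addr_, len_); }
        MappedArtefact(const MappedArtefact&) = delete;
        MappedArtefact& operator=(const MappedArtefact&) = delete;
        const char* data() const { return static_cast<const char*>(addr_); }
        size_t size() const { return len_; }
    private:
        void* addr_ = nullptr;
        size_t len_ = 0;
    };

    int main() {
        MappedArtefact a("/etc/hostname"); // illustrative path
        return a.size() ? 0 : 1;
    }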
On 32-bit Linux, time_t is a long. Laminar uses time_t extensively
but provided only int and int64 db access methods, making use of a
long ambiguous. Since there is no explicit use of int64, and because
on 32-bit Linux long and int are distinct types despite having the
same width, replacing the int64 handlers with long handlers fixes
the compile error.
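A condensed illustration of the overload-resolution problem; bindValue
is a hypothetical stand-in for the db access methods:

    #include <ctime>

    void bindValue(int v)  { (void)v; }  // e.g. wraps sqlite3_bind_int
    void bindValue(long v) { (void)v; }  // replaces the old int64 overload;
                                         // e.g. wraps sqlite3_bind_int64

    int main() {
        time_t t = time(nullptr);
        // With int and int64 (long long on 32-bit Linux) overloads, this
        // call is ambiguous there: long -> int and long -> long long are
        // equally ranked conversions. With a long overload it is an exact
        // match everywhere, since time_t is long on both 32- and 64-bit
        // Linux.
        bindValue(t);
        return 0;
    }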
This fixes a bug where the last pieces of console output were lost: the
event loop could schedule the exited child process's SIGCHLD handler
before the handler that reads the remainder of the process's output.
Work around this by doing an additional (non-blocking) read in the
SIGCHLD handler.
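A hedged sketch of that extra read, assuming the child's output pipe is
already non-blocking; delivering the data to the log is elided:

    #include <errno.h>
    #include <unistd.h>

    // Called from the SIGCHLD handling path before reaping the child:
    // drain anything still buffered in the (non-blocking) output pipe so
    // the trailing console output is not lost.
    void drainRemainingOutput(int fd) {
        char buf[4096];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n > 0) {
                // ...deliver buf[0..n) to the run's log (elided)...
                continue;
            }
            if (n < 0 && errno == EINTR) continue;
            break; // EOF, or EAGAIN/EWOULDBLOCK meaning nothing pending
        }
    }

    int main() { return 0; }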
Reading into a static buffer introduces a race condition that only
manifests under load: there is no guarantee that the continuation
passed to then() will run before another task overwrites the buffer.
Allocating a local string is the only correct solution.
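To make the race concrete, a sketch against kj's AsyncInputStream;
readRacy shows the broken pattern and readSafe moves a per-read
allocation into the continuation:

    #include <kj/async-io.h>
    #include <kj/array.h>

    kj::Promise<void> readRacy(kj::AsyncInputStream& stream) {
        static char buffer[4096]; // shared by every in-flight read
        return stream.tryRead(buffer, 1, sizeof(buffer))
            .then([](size_t) {
                // By the time this runs, another task may already have
                // overwritten buffer: a race that only shows under load.
            });
    }

    kj::Promise<void> readSafe(kj::AsyncInputStream& stream) {
        auto buffer = kj::heapArray<char>(4096); // owned by this read alone
        kj::ArrayPtr<char> ptr = buffer;
        return stream.tryRead(ptr.begin(), 1, ptr.size())
            .then([buffer = kj::mv(buffer)](size_t n) {
                // buffer is guaranteed intact here; copy buffer[0..n) into
                // a locally allocated string if it must outlive this scope.
                (void)n;
            });
    }

    int main() { return 0; }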
This cleans up some inconsistency where the 'completed - started'
calculation sometimes happened on the client side and sometimes on the
server. It should also fix the 'cumulative time' issue mentioned in #8.
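A minimal sketch of the server-side form, with a hypothetical schema:
computing the duration once in SQL means every client sees the same
value.

    #include <cstdio>

    // Duration is derived in the query itself rather than by
    // subtracting timestamps in the browser.
    static const char* kQuery =
        "SELECT name, number, completed - started AS duration "
        "FROM builds WHERE completed IS NOT NULL";

    int main() {
        std::puts(kQuery); // bound and executed via the db layer in practice
        return 0;
    }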