Use a predefined byte order, integers with fixed width, etc.
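
A minimal sketch of what this can look like for a single field,
assuming hypothetical helper names: both peers agree on a fixed-width,
big-endian wire encoding regardless of host byte order or native
integer sizes.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h> /* htonl, ntohl */

    /* Hypothetical helpers: encode/decode a uint32_t in network byte
     * order so the wire format is the same on every host. */
    static void u32_to_wire(uint32_t value, unsigned char buf[4])
    {
        uint32_t be = htonl(value);
        memcpy(buf, &be, sizeof(be));
    }

    static uint32_t u32_from_wire(const unsigned char buf[4])
    {
        uint32_t be;
        memcpy(&be, buf, sizeof(be));
        return ntohl(be);
    }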

---

This makes sure we use the dual-stack feature to support both IPv4 and
IPv6.
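
One common way to get dual-stack behaviour is an AF_INET6 listening
socket with IPV6_V6ONLY switched off; the actual setup code isn't
shown here, so treat this as a sketch (error handling omitted).

    #include <stdint.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Sketch: one AF_INET6 socket accepts both IPv6 connections and
     * IPv4 connections (seen as ::ffff:a.b.c.d mapped addresses). */
    int make_dual_stack_listener(uint16_t port)
    {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);
        int off = 0;
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

        struct sockaddr_in6 addr = {0};
        addr.sin6_family = AF_INET6;
        addr.sin6_addr = in6addr_any;
        addr.sin6_port = htons(port);

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }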

---

This allows free workers to pick up jobs after dead workers.

---

Found when running in Docker.

---

The problem is that pthread_cond_destroy is unsafe to call while there
are threads waiting in pthread_cond_wait. I'm not sure this fix is
enough: what if the broadcast doesn't reach the waiting threads before
we call pthread_cond_destroy? Does it even work that way? I don't know.
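
For what it's worth, the usual way to make this safe is to guarantee
that no thread can still be blocked on the condition variable before
destroying it, e.g. by joining the waiters after the broadcast. A
sketch with hypothetical names:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct queue {
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
        bool            shutting_down;
    };

    void queue_shutdown(struct queue *q, pthread_t *workers, size_t n)
    {
        pthread_mutex_lock(&q->lock);
        q->shutting_down = true;              /* waiters re-check this */
        pthread_cond_broadcast(&q->nonempty); /* wake every waiter */
        pthread_mutex_unlock(&q->lock);

        /* Only after the joins is it certain that nobody is still
         * inside pthread_cond_wait, so destroying is safe here. */
        for (size_t i = 0; i < n; i++)
            pthread_join(workers[i], NULL);

        pthread_cond_destroy(&q->nonempty);
        pthread_mutex_destroy(&q->lock);
    }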

---

Previously, the client had no way to distinguish errors from successful
calls.
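
A hypothetical illustration of the idea (the real wire format isn't
shown here): carrying an explicit status code in every reply lets the
client branch on success vs. error instead of guessing.

    #include <stdint.h>

    enum msg_status {
        MSG_STATUS_OK  = 0,
        MSG_STATUS_ERR = 1,
    };

    /* Hypothetical reply header; on the wire the length would be in
     * network byte order. */
    struct msg_response {
        uint8_t  status; /* one of enum msg_status */
        uint32_t len;    /* payload length */
        /* payload follows */
    };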

---

pthread_attr_setsigmask_np is only available since glibc 2.32, which is
too
modern.
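
The portable fallback is usually to adjust the signal mask of the
creating thread around pthread_create, since the new thread inherits
that mask; a sketch:

    #include <pthread.h>
    #include <signal.h>

    /* thread_main is a placeholder for the real thread function. */
    int spawn_with_signals_blocked(pthread_t *tid,
                                   void *(*thread_main)(void *),
                                   void *arg)
    {
        sigset_t all, old;
        sigfillset(&all);

        pthread_sigmask(SIG_BLOCK, &all, &old);
        int rc = pthread_create(tid, NULL, thread_main, arg);
        pthread_sigmask(SIG_SETMASK, &old, NULL); /* restore our mask */

        return rc;
    }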

---

Thanks, Valgrind!

---

Previously, I had a stupid system where I would create a thread after
every accept(), and put worker descriptors in a queue. A special
"scheduler" thread would then pick them out, and give out jobs to
complete.
The problem was, of course, I couldn't conveniently poll job status from
workers. I thought about using poll(), but that turned out to be a
horribly complicated API. How do I deal with partial reads, for example?
I don't honestly know.
Then it hit me that I could just use the threads that handle accept()ed
connections as "worker threads", which would synchronously schedule jobs
and wait for them to complete. This solves every problem and removes the
need for a lot of inter-thread synchronization magic. It even works now,
holy crap! You can launch and terminate workers at will, and they will
pick up new jobs automatically.
As a side note, msg_recv_and_handle turned out to be too limiting and
complicated for me, so I got rid of that, and do normal
msg_recv/msg_send calls.
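
A rough sketch of that shape: msg_recv and msg_send are the functions
named above, but their signatures, the struct msg type, and handle_job
are assumptions made for illustration.

    #include <stdint.h>
    #include <unistd.h>

    struct msg;                              /* opaque wire message */
    int msg_recv(int fd, struct msg **out);  /* assumed: 0 on success */
    int msg_send(int fd, const struct msg *m);
    struct msg *handle_job(const struct msg *request);

    /* Runs as the per-connection thread: block on the next job, run
     * it to completion, send back the result. No scheduler thread,
     * no status polling. */
    static void *connection_thread(void *arg)
    {
        int fd = (int)(intptr_t)arg;
        struct msg *request;

        while (msg_recv(fd, &request) == 0) {
            struct msg *reply = handle_job(request);
            msg_send(fd, reply);
        }

        close(fd);
        return NULL;
    }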

---

Apparently, if you try to write() into a socket with the other party
already gone, your process receives a SIGPIPE. Wtf?
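
The usual fixes are to either ignore SIGPIPE process-wide, so write
simply fails with EPIPE, or to send with MSG_NOSIGNAL; which one this
commit uses isn't shown, so this is just a sketch of both.

    #include <signal.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Option 1: ignore the signal for the whole process. */
    void ignore_sigpipe(void)
    {
        signal(SIGPIPE, SIG_IGN);
    }

    /* Option 2: suppress it per call by using send instead of write. */
    ssize_t send_nosigpipe(int fd, const void *buf, size_t len)
    {
        return send(fd, buf, len, MSG_NOSIGNAL);
    }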

---

do { ... } while (0) is objectively better:
https://stackoverflow.com/q/1067226/514684
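
For reference, the idiom in question (hypothetical macro): the
do { ... } while (0) wrapper makes a multi-statement macro behave like
a single statement, so it composes with if/else and still requires a
trailing semicolon.

    #include <stdio.h>

    #define LOG_AND_BAIL(msg, rc)         \
        do {                              \
            fprintf(stderr, "%s\n", msg); \
            return (rc);                  \
        } while (0)

    int parse_flag(const char *arg)
    {
        if (arg == NULL)
            LOG_AND_BAIL("missing argument", -1); /* expands safely */
        return 0;
    }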

---

pthread functions return positive error codes.
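
Concretely, that means checking the return value itself and passing it
to strerror, rather than looking at errno:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static void *noop(void *arg) { (void)arg; return NULL; }

    int start_thread(pthread_t *tid)
    {
        int rc = pthread_create(tid, NULL, noop, NULL);
        if (rc != 0) { /* rc is the error code; errno is not set */
            fprintf(stderr, "pthread_create: %s\n", strerror(rc));
            return -1;
        }
        return 0;
    }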

---

Well, maybe "graceful" is a strong word, but now you _can_ do
./server &
./worker &
./client ci_run URL REV && kill "$( pidof worker )"
and the worker will wait for the CI run to complete.
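
One plausible way to get that behaviour (an assumption; the actual
mechanism isn't shown here): the termination signal only sets a flag,
and the worker checks it between jobs, so an in-flight CI run always
finishes before the process exits.

    #include <signal.h>

    static volatile sig_atomic_t terminate_requested = 0;

    static void on_sigterm(int signo)
    {
        (void)signo;
        terminate_requested = 1;
    }

    void install_sigterm_handler(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigterm;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGTERM, &sa, NULL);
    }

    /* Worker main loop (run_next_job is hypothetical):
     *
     *     while (!terminate_requested)
     *         run_next_job();
     */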

---

This adds a basic "worker" program.
You can now do something like
./server &
./worker &
./client ci_run URL REV
and the server should pass a message to worker, after which it should
clone the repository at URL, checkout REV, and try to run the CI script.
It's extremely unfinished: I need to sort out the graceful shutdown, how
the server manages workers, etc.
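
A rough sketch of that job sequence; run_cmd, the workdir handling,
and the "./ci" script name are all assumptions for illustration, not
the actual implementation.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run argv in a child process, optionally after chdir(dir). */
    static int run_cmd(const char *dir, char *const argv[])
    {
        pid_t pid = fork();
        if (pid == 0) {
            if (dir != NULL && chdir(dir) != 0)
                _exit(127);
            execvp(argv[0], argv);
            _exit(127); /* exec failed */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int ci_run(const char *url, const char *rev, const char *workdir)
    {
        char *clone[]    = {"git", "clone", "--", (char *)url,
                            (char *)workdir, NULL};
        char *checkout[] = {"git", "checkout", (char *)rev, NULL};
        char *script[]   = {"./ci", NULL}; /* hypothetical script name */

        if (run_cmd(NULL, clone) != 0)
            return -1;
        if (run_cmd(workdir, checkout) != 0)
            return -1;
        return run_cmd(workdir, script);
    }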

---

This is a dumb warning.

---

First, rename all API functions so that they start with net_.
Second, abstract the basic TCP server functionality into tcp_server.c.
This includes reworking net_accept so that it's a simple blocking
operation, and moving the callback stuff into tcp_server.c. Also, the
server now uses detached threads instead of fork(), since I want
connection handlers to share memory.
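
A sketch of the tcp_server.c shape this describes; tcp_server_run,
connection_handler, and the context struct are assumed names, but the
detached-thread-per-connection idea is the one from the message
(error handling trimmed).

    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    typedef void (*connection_handler)(int fd);

    struct handler_ctx {
        connection_handler handler;
        int fd;
    };

    static void *handler_trampoline(void *arg)
    {
        struct handler_ctx *ctx = arg;
        ctx->handler(ctx->fd);
        close(ctx->fd);
        free(ctx);
        return NULL;
    }

    int tcp_server_run(int listen_fd, connection_handler handler)
    {
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL); /* blocking */
            if (fd < 0)
                return -1;

            struct handler_ctx *ctx = malloc(sizeof(*ctx));
            ctx->handler = handler;
            ctx->fd = fd;

            /* Detached: no join needed, and memory is shared with
             * the rest of the server (the reason fork was dropped). */
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

            pthread_t tid;
            pthread_create(&tid, &attr, handler_trampoline, ctx);
            pthread_attr_destroy(&attr);
        }
    }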