Difference From qsu-0.5.0 To qsu-0.6.0
2024-10-05
20:13 | Docs. To make clap documentation make more sense. check-in: c99a00ffd2 user: jan tags: trunk
2024-09-13
01:15 | Update description. check-in: 8f422d4ee5 user: jan tags: qsu-0.6.0, trunk
01:07 | Cleanup. check-in: 8984fa32cf user: jan tags: trunk
2024-05-20
00:11 | Merge. check-in: 343bfcf196 user: jan tags: trunk
00:02 | Release maintenance. Closed-Leaf check-in: 9190e0edaa user: jan tags: qsu-0.5.0, tracing-filter
2024-05-19
23:12 | Cleanup. check-in: 80ff88f60d user: jan tags: tracing-filter
Changes to .efiles.
1 2 3 4 5 6 7 8 9 10 | Cargo.toml README.md www/index.md www/design-notes.md www/changelog.md src/err.rs src/lib.rs src/lumberjack.rs src/rt.rs src/rt/nosvc.rs | > > | 1 2 3 4 5 6 7 8 9 10 11 12 | Cargo.toml README.md www/index.md www/opinionated.md www/qsurt.md www/design-notes.md www/changelog.md src/err.rs src/lib.rs src/lumberjack.rs src/rt.rs src/rt/nosvc.rs |
︙ | ︙ | |||
22 23 24 25 26 27 28 29 30 31 32 33 34 | src/installer/launchd.rs src/installer/systemd.rs src/argp.rs tests/err/mod.rs tests/apps/mod.rs tests/apperr.rs tests/initrunshutdown.rs examples/hellosvc.rs examples/hellosvc-tokio.rs examples/hellosvc-rocket.rs examples/err/mod.rs examples/procres/mod.rs examples/argp/mod.rs | > > > | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 | src/installer/launchd.rs src/installer/systemd.rs src/argp.rs tests/err/mod.rs tests/apps/mod.rs tests/apperr.rs tests/initrunshutdown.rs examples/simplesync.rs examples/simpletokio.rs examples/simplerocket.rs examples/hellosvc.rs examples/hellosvc-tokio.rs examples/hellosvc-rocket.rs examples/err/mod.rs examples/procres/mod.rs examples/argp/mod.rs |
Changes to Cargo.toml.
1 2 | [package] name = "qsu" | | | | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 | [package] name = "qsu" version = "0.6.0" edition = "2021" license = "0BSD" # https://crates.io/category_slugs categories = [ "os" ] keywords = [ "service", "systemd", "winsvc" ] repository = "https://repos.qrnch.tech/pub/qsu" description = "Service subsystem utilities and runtime wrapper." rust-version = "1.70" exclude = [ ".fossil-settings", ".efiles", ".fslckout", "www", "bacon.toml", "build_docs.sh", "Rocket.toml", "rustfmt.toml" ] # https://doc.rust-lang.org/cargo/reference/manifest.html#the-badges-section [badges] |
︙ | ︙ | |||
31 32 33 34 35 36 37 | systemd = ["dep:sd-notify"] rocket = ["rt", "dep:rocket", "tokio"] rt = [] tokio = ["rt", "tokio/macros", "tokio/rt-multi-thread", "tokio/signal"] wait-for-debugger = ["dep:dbgtools-win"] [dependencies] | < | | | | | | | | | | | | > > > > > > > > | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | systemd = ["dep:sd-notify"] rocket = ["rt", "dep:rocket", "tokio"] rt = [] tokio = ["rt", "tokio/macros", "tokio/rt-multi-thread", "tokio/signal"] wait-for-debugger = ["dep:dbgtools-win"] [dependencies] async-trait = { version = "0.1.82" } chrono = { version = "0.4.38" } clap = { version = "4.5.17", optional = true, features = [ "derive", "env", "string", "wrap_help" ] } env_logger = { version = "0.11.5" } futures = { version = "0.3.30" } itertools = { version = "0.13.0", optional = true } killswitch = { version = "0.4.2" } log = { version = "0.4.22" } parking_lot = { version = "0.12.3" } rocket = { version = "0.5.1", optional = true } sidoc = { version = "0.1.0", optional = true } tokio = { version = "1.40.0", features = ["sync"] } time = { version = "0.3.36", features = ["macros"] } tracing = { version = "0.1.40" } [dependencies.tracing-subscriber] version = "0.3.18" default-features = false features = ["env-filter", "time", "fmt", "ansi"] [target.'cfg(target_os = "linux")'.dependencies] sd-notify = { version = "0.4.2", optional = true } [target.'cfg(unix)'.dependencies] libc = { version = "0.2.158" } nix = { version = "0.29.0", features = ["pthread", "signal", "time"] } [target.'cfg(windows)'.dependencies] dbgtools-win = { version = "0.2.1", optional = true } eventlog = { version = "0.2.2" } registry = { version = "1.2.3" } scopeguard = { version = "1.2.0" } windows-service = { version = "0.7.0" } windows-sys = { version = "0.59.0", features = [ "Win32_Foundation", 
"Win32_System_Console" ] } winreg = { version = "0.52.0" } [dev-dependencies] clap = { version = "4.5.4", features = ["derive", "env", "wrap_help"] } tokio = { version = "1.40.0", features = ["time"] } [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs", "--generate-link-to-definition"] [[example]] name = "hellosvc" required-features = ["clap", "installer", "rt"] [[example]] name = "hellosvc-tokio" required-features = ["clap", "installer", "rt", "tokio"] [[example]] name = "hellosvc-rocket" required-features = ["clap", "installer", "rt", "rocket"] [lints.clippy] all = { level = "deny", priority = -1 } pedantic = { level = "warn", priority = -1 } nursery = { level = "warn", priority = -1 } cargo = { level = "warn", priority = -1 } multiple_crate_versions = "allow" |
Changes to README.md.
1 2 | # qsu | | > > | | | < > | 1 2 3 4 5 6 7 8 9 10 11 | # qsu The _qsu_ ("kazoo") crate offers portable service utilities with an opinionated service wrapper runtime. _qsu_'s primary objective is to allow a service developer to focus on the actual service application code, without having to bother with service subsystem-specific integrations -- while at the same time allowing the service application to run as a regular foreground process, without the code needing to diverge between the two. |
Added bacon.toml.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 | # This is a configuration file for the bacon tool # # Bacon repository: https://github.com/Canop/bacon # Complete help on configuration: https://dystroy.org/bacon/config/ # You can also check bacon's own bacon.toml file # as an example: https://github.com/Canop/bacon/blob/main/bacon.toml # For information about clippy lints, see: # https://github.com/rust-lang/rust-clippy/blob/master/README.md #default_job = "check" default_job = "clippy-all" [jobs.check] command = ["cargo", "check", "--color", "always"] need_stdout = false [jobs.check-all] command = ["cargo", "check", "--all-targets", "--color", "always"] need_stdout = false # Run clippy on the default target [jobs.clippy] command = [ "cargo", "clippy", "--all-features", "--color", "always", ] need_stdout = false # Run clippy on all targets # To disable some lints, you may change the job this way: # [jobs.clippy-all] # command = [ # "cargo", "clippy", # "--all-targets", # "--color", "always", # "--", # "-A", "clippy::bool_to_int_with_if", # "-A", "clippy::collapsible_if", # "-A", "clippy::derive_partial_eq_without_eq", # ] # need_stdout = false [jobs.clippy-all] command = [ "cargo", "clippy", "--all-features", "--all-targets", "--color", "always", ] need_stdout = false # This job lets you run # - all tests: bacon test # - a specific test: bacon test -- config::test_default_files # - the tests of a package: bacon test -- -- -p config [jobs.test] command = [ "cargo", "test", "--color", 
"always", "--", "--color", "always", # see https://github.com/Canop/bacon/issues/124 ] need_stdout = true [jobs.doc] command = ["cargo", "doc", "--color", "always", "--no-deps"] need_stdout = false # If the doc compiles, then it opens in your browser and bacon switches # to the previous job [jobs.doc-open] command = ["cargo", "doc", "--color", "always", "--no-deps", "--open"] need_stdout = false on_success = "back" # so that we don't open the browser at each change # You can run your application and have the result displayed in bacon, # *if* it makes sense for this crate. # Don't forget the `--color always` part or the errors won't be # properly parsed. # If your program never stops (eg a server), you may set `background` # to false to have the cargo run output immediately displayed instead # of waiting for program's end. [jobs.run] command = [ "cargo", "run", "--color", "always", # put launch parameters for your program behind a `--` separator ] need_stdout = true allow_warnings = true background = true # This parameterized job runs the example of your choice, as soon # as the code compiles. # Call it as # bacon ex -- my-example [jobs.ex] command = ["cargo", "run", "--color", "always", "--example"] need_stdout = true allow_warnings = true # You may define here keybindings that would be specific to # a project, for example a shortcut to launch a specific job. # Shortcuts to internal functions (scrolling, toggling, etc.) # should go in your personal global prefs.toml file instead. [keybindings] # alt-m = "job:my-job" c = "job:clippy-all" # comment this to have 'c' run clippy on only the default target |
Changes to examples/argp/mod.rs.
1 2 | use clap::ArgMatches; | | | | > > | | | | > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | use clap::ArgMatches; use qsu::{installer::RegSvc, rt::SrvAppRt}; use crate::err::Error; pub struct AppArgsProc { pub(crate) bldr: Box<dyn Fn() -> SrvAppRt<Error>> } impl qsu::argp::ArgsProc for AppArgsProc { type AppErr = Error; /// Process a `register-service` subcommand. fn proc_inst( &mut self, _sub_m: &ArgMatches, regsvc: RegSvc ) -> Result<RegSvc, Self::AppErr> { // This is split out into its own function because the orphan rule wouldn't // allow the application to implement a std::io::Error -> qsu::AppErr // conversion in one go, so we do it in two steps instead. // proc_inst_inner()'s `?` converts "all" errors into `Error`. // The proc_inst() method's `?` converts from `Error` to `qsu::AppErr` proc_inst_inner(regsvc) } fn build_apprt(&mut self) -> Result<SrvAppRt<Error>, Self::AppErr> { Ok((self.bldr)()) } } fn proc_inst_inner(regsvc: RegSvc) -> Result<RegSvc, Error> { // Use current working directory as the service's workdir let cwd = std::env::current_dir()?.to_str().unwrap().to_string(); let mut regsvc = regsvc .workdir(cwd) .env("HOLY", "COW") .env("Private", "Public") .env("General", "Specific"); // Set display name, but only if it wasn't already specified on the command // line if regsvc.display_name.is_none() { regsvc.display_name_ref("Hello Service"); } // Hard-code service as supporting conf-reload service events. let regsvc = regsvc.conf_reload(); // Add a registry configuration callback (Windows only). #[cfg(windows)] let regsvc = regsvc.regconf(|_svcname, params| { // Leave a key/value pair in the registry as a hello params.set_value("AppArgParser", &"SaysHello")?; Ok(()) }); Ok(regsvc) } |
︙ | ︙ |
Changes to examples/err/mod.rs.
1 2 3 4 5 6 7 8 9 | use std::{fmt, io}; #[derive(Debug)] pub enum Error { IO(String), Qsu(String) } impl std::error::Error for Error {} | < < | < < | < | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | use std::{fmt, io}; #[derive(Debug)] pub enum Error { IO(String), Qsu(String) } impl std::error::Error for Error {} impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Self::IO(s) => write!(f, "I/O error; {s}"), Self::Qsu(s) => write!(f, "qsu error; {s}") } } } impl From<io::Error> for Error { fn from(err: io::Error) -> Self { Self::IO(err.to_string()) } } impl From<qsu::CbErr<Self>> for Error { fn from(err: qsu::CbErr<Self>) -> Self { Self::Qsu(err.to_string()) } } /* /// Convenience converter used to pass application-defined errors from the /// inner callback back out from the qsu runtime. impl From<Error> for qsu::Error { fn from(err: Error) -> qsu::Error { qsu::Error::app(err) } } */ /// Convenience converter for mapping application-specific errors to /// `qsu::CbErr::App`. impl From<Error> for qsu::CbErr<Error> { fn from(err: Error) -> Self { Self::App(err) } } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to examples/hellosvc-rocket.rs.
1 2 3 4 5 6 7 8 9 | #[macro_use] extern crate rocket; mod argp; mod err; mod procres; use qsu::{ argp::ArgParser, | < | < | | > | > > > > | > | < | | | < < < < < < < | | > > > > > | | > > > | | > > > > | > > > > > > > | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 | #[macro_use] extern crate rocket; mod argp; mod err; mod procres; use qsu::{ argp::ArgParser, rt::{InitCtx, RocketServiceHandler, RunEnv, SrvAppRt, SvcEvt, TermCtx} }; use tokio::sync::mpsc; use rocket::{Build, Ignite, Rocket}; use err::Error; use procres::ProcRes; struct MyService { rx: mpsc::UnboundedReceiver<SvcEvt> } #[qsu::async_trait] impl RocketServiceHandler for MyService { type AppErr = Error; async fn init( &mut self, ictx: InitCtx ) -> Result<Vec<Rocket<Build>>, Self::AppErr> { tracing::trace!("Running init()"); let mut rockets = vec![]; ictx.report(Some("Building a rocket!".into())); let rocket = rocket::build().mount("/", routes![index]); ictx.report(Some("Pushing a rocket".into())); rockets.push(rocket); Ok(rockets) } #[allow(clippy::redundant_pub_crate)] async fn run( &mut self, rockets: Vec<Rocket<Ignite>>, _re: &RunEnv ) -> Result<(), Self::AppErr> { for rocket in rockets { tokio::task::spawn(async { rocket.launch().await.unwrap(); }); } loop { tokio::select! 
{ evt = self.rx.recv() => { match evt { Some(SvcEvt::Shutdown(_)) => { tracing::info!("The service subsystem requested that the application shut down"); break; } Some(SvcEvt::ReloadConf) => { tracing::info!("The service subsystem requested that the application reload its configuration"); } _ => { } } } } } Ok(()) } async fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr> { tctx.report(Some(format!("Entered {}", "shutdown").into())); tracing::trace!("Running shutdown()"); Ok(()) } } fn main() -> ProcRes { // In the future we'll be able to use Try to implement support for implicit // conversion to ProcRes from a Result using `?`, but for now use this hack. ProcRes::into(main2().into()) } fn main2() -> Result<(), Error> { // Derive default service name from executable name. let svcname = qsu::default_service_name() .expect("Unable to determine default service name"); let bldr = Box::new(|| { let (tx, rx) = mpsc::unbounded_channel(); let svcevt_handler = Box::new(move |msg| { // Just forward messages to runtime handler tx.send(msg).unwrap(); }); let rt_handler = Box::new(MyService { rx }); SrvAppRt::Rocket { svcevt_handler, rt_handler } }); // Parse, and process, command line arguments. let mut argsproc = argp::AppArgsProc { bldr }; let ap = ArgParser::new(&svcname, &mut argsproc).regsvc_proc(|mut regsvc| { // Hard-code this as a network service application. regsvc.netservice_ref(); // Set description, but only if it hasn't been set already (presumably // via the command line). if regsvc.description.is_none() { regsvc.description_ref("A simple Rocket server that logs requests"); } regsvc }); ap.proc()?; Ok(()) } #[get("/")] fn index() -> &'static str { |
︙ | ︙ |
Changes to examples/hellosvc-tokio.rs.
1 2 3 4 5 6 7 8 9 10 | //! Simple service that does nothing other than log/trace every N seconds. mod argp; mod err; mod procres; use std::time::{Duration, Instant}; use qsu::{ argp::ArgParser, | > > < | < < | > > > > | > | < < < < > > | > > | | | | < < < < < < | | > > > > > | | > > > > | | < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | //! Simple service that does nothing other than log/trace every N seconds. mod argp; mod err; mod procres; use std::time::{Duration, Instant}; use tokio::sync::mpsc; use qsu::{ argp::ArgParser, rt::{InitCtx, RunEnv, SrvAppRt, SvcEvt, TermCtx, TokioServiceHandler} }; use err::Error; use procres::ProcRes; struct MyService { rx: mpsc::UnboundedReceiver<SvcEvt> } #[qsu::async_trait] impl TokioServiceHandler for MyService { type AppErr = Error; async fn init(&mut self, ictx: InitCtx) -> Result<(), Self::AppErr> { ictx.report(Some("Entered init".into())); tracing::trace!("Running init()"); Ok(()) } #[allow(clippy::redundant_pub_crate)] async fn run(&mut self, _re: &RunEnv) -> Result<(), Self::AppErr> { const SECS: u64 = 30; // unwrap() is okay due to time scales involved let mut last_dump = Instant::now() .checked_sub(Duration::from_secs(SECS)) .unwrap(); loop { if last_dump.elapsed() > Duration::from_secs(SECS) { log::error!("error"); log::warn!("warn"); log::info!("info"); log::debug!("debug"); log::trace!("trace"); tracing::error!("error"); tracing::warn!("warn"); tracing::info!("info"); tracing::debug!("debug"); tracing::trace!("trace"); last_dump = Instant::now(); } tokio::select!
{ () = tokio::time::sleep(std::time::Duration::from_secs(1)) => { continue; } evt = self.rx.recv() => { match evt { Some(SvcEvt::Shutdown(_)) => { tracing::info!("The service subsystem requested that the application shut down"); break; } Some(SvcEvt::ReloadConf) => { tracing::info!("The service subsystem requested that the application reload its configuration"); } _ => { } } } } } Ok(()) } async fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr> { tctx.report(Some(("Entered shutdown".to_string()).into())); tracing::trace!("Running shutdown()"); Ok(()) } } fn main() -> ProcRes { // In the future we'll be able to use Try to implement support for implicit // conversion to ProcRes from a Result using `?`, but for now use this hack. ProcRes::into(main2().into()) } fn main2() -> Result<(), Error> { // Derive default service name from executable name. let svcname = qsu::default_service_name() .expect("Unable to determine default service name"); let bldr = Box::new(|| { let (tx, rx) = mpsc::unbounded_channel(); let svcevt_handler = Box::new(move |msg| { // Just forward messages to runtime handler tx.send(msg).unwrap(); }); let rt_handler = Box::new(MyService { rx }); SrvAppRt::Tokio { rtbldr: None, svcevt_handler, rt_handler } }); // Parse, and process, command line arguments. let mut argsproc = argp::AppArgsProc { bldr }; let ap = ArgParser::new(&svcname, &mut argsproc); ap.proc()?; Ok(()) } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to examples/hellosvc.rs.
1 2 3 4 5 6 7 8 9 10 11 12 13 | //! Simple service that does nothing other than log/trace every N seconds. mod argp; mod err; mod procres; use std::{ thread, time::{Duration, Instant} }; use qsu::{ argp::ArgParser, | > > < | < | > | > > > | < < < < | > | > > | | | | < < < < < < < < | | | > > | > | > > > | > | > > > | < < > | > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 | //! Simple service that does nothing other than log/trace every N seconds. mod argp; mod err; mod procres; use std::{ thread, time::{Duration, Instant} }; use tokio::sync::mpsc; use qsu::{ argp::ArgParser, rt::{InitCtx, RunEnv, ServiceHandler, SrvAppRt, SvcEvt, TermCtx} }; use err::Error; use procres::ProcRes; struct MyService { rx: mpsc::UnboundedReceiver<SvcEvt> } impl ServiceHandler for MyService { type AppErr = Error; fn init(&mut self, ictx: InitCtx) -> Result<(), Self::AppErr> { ictx.report(Some("Entered init".into())); tracing::trace!("Running init()"); Ok(()) } fn run(&mut self, _re: &RunEnv) -> Result<(), Self::AppErr> { const SECS: u64 = 30; // unwrap() is okay due to time scale let mut last_dump = Instant::now() .checked_sub(Duration::from_secs(SECS)) .unwrap(); loop { if last_dump.elapsed() > Duration::from_secs(SECS) { log::error!("error"); log::warn!("warn"); log::info!("info"); log::debug!("debug"); log::trace!("trace"); tracing::error!("error"); tracing::warn!("warn"); tracing::info!("info"); tracing::debug!("debug"); tracing::trace!("trace"); last_dump = Instant::now(); } match self.rx.try_recv() { Ok(SvcEvt::Shutdown(_)) => { tracing::info!("Service application shutdown"); break; } 
Ok(SvcEvt::ReloadConf) => { tracing::info!( "The service subsystem requested that the application reload its \ configuration" ); } _ => {} } thread::sleep(std::time::Duration::from_secs(1)); } Ok(()) } fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr> { tctx.report(Some(("Entered shutdown".to_string()).into())); tracing::trace!("Running shutdown()"); Ok(()) } } fn main() -> ProcRes { // In the future we'll be able to use Try to implement support for implicit // conversion to ProcRes from a Result using `?`, but for now use this hack. ProcRes::into(main2().into()) } fn main2() -> Result<(), Error> { // Derive default service name from executable name. let svcname = qsu::default_service_name() .expect("Unable to determine default service name"); // Create a closure that will generate a sync service runtime with a // service event handler that will forward any messages it receives to a // channel receiver end-point held by the main service handler. let bldr = Box::new(|| { let (tx, rx) = mpsc::unbounded_channel(); SrvAppRt::Sync { svcevt_handler: Box::new(move |msg| { // Just forward messages to runtime handler tx.send(msg).unwrap(); }), rt_handler: Box::new(MyService { rx }) } }); // Parse, and process, command line arguments. // Add a service registration callback that will set the service's // description. let mut argsproc = argp::AppArgsProc { bldr }; let ap = ArgParser::new(&svcname, &mut argsproc).regsvc_proc(|mut regsvc| { // Set description, but only if it hasn't been set already (presumably // via the command line). if regsvc.description.is_none() { regsvc .description_ref("A sync server that says hello every 30 seconds"); } regsvc }); ap.proc()?; Ok(()) } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to examples/procres/mod.rs.
1 2 3 4 5 6 7 8 9 10 11 12 13 | use std::process::{ExitCode, Termination}; use crate::err::Error; #[repr(u8)] pub enum ProcRes { Success, Error(Error) } impl Termination for ProcRes { fn report(self) -> ExitCode { match self { | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | use std::process::{ExitCode, Termination}; use crate::err::Error; #[repr(u8)] pub enum ProcRes { Success, Error(Error) } impl Termination for ProcRes { fn report(self) -> ExitCode { match self { Self::Success => { //eprintln!("Process terminated successfully"); ExitCode::from(0) } Self::Error(e) => { eprintln!("Abnormal termination: {e}"); ExitCode::from(1) } } } } impl<T> From<Result<T, Error>> for ProcRes { fn from(res: Result<T, Error>) -> Self { match res { Ok(_) => Self::Success, Err(e) => Self::Error(e) } } } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Added examples/simplerocket.rs.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 | #[cfg(all(feature = "rt", feature = "rocket"))] #[macro_use] extern crate rocket; #[cfg(all(feature = "rt", feature = "rocket"))] mod gate { use tokio::{sync::mpsc, task}; use qsu::rocket::Rocket; use qsu::{ async_trait, rt::{ InitCtx, RocketServiceHandler, RunCtx, RunEnv, SrvAppRt, SvcEvt, TermCtx }, tracing }; #[derive(Debug)] enum MyError {} struct MyService { rx: Option<mpsc::UnboundedReceiver<SvcEvt>> } #[async_trait] impl RocketServiceHandler for MyService { type AppErr = MyError; async fn init( &mut self, ictx: InitCtx ) -> Result<Vec<Rocket<rocket::Build>>, Self::AppErr> { tracing::info!("Running RocketServiceHandler::init()"); ictx.report(Some("Building a rocket".into())); let rockets = vec![rocket::build().mount("/", routes![index])]; ictx.report(Some("Finalized service handler initialization".into())); Ok(rockets) } async fn run( &mut self, mut rockets: Vec<Rocket<rocket::Ignite>>, _re: &RunEnv ) -> Result<(), Self::AppErr> { tracing::info!("Running RocketServiceHandler::run()"); // Spawn a task to handle service events let mut rx = self.rx.take().unwrap(); task::spawn(async move { loop { if let Some(msg) = rx.recv().await { tracing::info!("Application received qsu event {:?}", msg); if let SvcEvt::Shutdown(_) = msg { break; } } } }); if let 
Some(rocket) = rockets.pop() { rocket.launch().await.unwrap(); } Ok(()) } async fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr> { tracing::info!("Running RocketServiceHandler::shutdown()"); tctx.report(Some("Terminating service handler".into())); // .. do termination here .. tctx.report(Some("Finalized service handler termination".into())); Ok(()) } } pub fn main() { let svcname = qsu::default_service_name().unwrap(); // Channel used to signal termination from service event handler to main // application let (tx, rx) = mpsc::unbounded_channel(); // Service event handler let svcevt_handler = move |msg| { let _ = tx.send(msg); }; // Set up main service runtime handler let svcrt = MyService { rx: Some(rx) }; // Define service application runtime components let apprt = SrvAppRt::Rocket { svcevt_handler: Box::new(svcevt_handler), rt_handler: Box::new(svcrt) }; // Launch service let rctx = RunCtx::new(&svcname); rctx.run(apprt).unwrap(); } #[get("/")] fn index() -> &'static str { log::error!("error"); log::warn!("warn"); log::info!("info"); log::debug!("debug"); log::trace!("trace"); tracing::error!("error"); tracing::warn!("warn"); tracing::info!("info"); tracing::debug!("debug"); tracing::trace!("trace"); "Hello, world!" } } fn main() { #[cfg(all(feature = "rt", feature = "rocket"))] gate::main(); #[cfg(not(all(feature = "rt", feature = "rocket")))] println!("simplerocket example requires the rt and rocket features"); } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Added examples/simplesync.rs.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | #[cfg(feature = "rt")] mod gate { use std::{sync::mpsc, time::Duration}; #[derive(Debug)] enum MyError {} use qsu::{ rt::{ Demise, InitCtx, RunCtx, RunEnv, ServiceHandler, SrvAppRt, SvcEvt, TermCtx, UserSig }, tracing }; struct MyService { rx_term: mpsc::Receiver<()> } impl ServiceHandler for MyService { type AppErr = MyError; fn init(&mut self, _ictx: InitCtx) -> Result<(), Self::AppErr> { tracing::info!("Running ServiceHandler::init()"); Ok(()) } fn run(&mut self, _re: &RunEnv) -> Result<(), Self::AppErr> { tracing::info!("Running ServiceHandler::run()"); tracing::info!("Wait for termination event for up to 4 seconds .."); let _ = self.rx_term.recv_timeout(Duration::from_secs(4)); Ok(()) } fn shutdown(&mut self, _tctx: TermCtx) -> Result<(), Self::AppErr> { tracing::info!("Running ServiceHandler::shutdown()"); Ok(()) } } pub fn main() { let svcname = qsu::default_service_name().unwrap(); // Channel used to signal termination from service event handler to main // application let (tx, rx) = mpsc::channel::<()>(); // Service event handler let svcevt_handler = move |msg| match msg { SvcEvt::Shutdown(demise) => { tracing::debug!("Shutdown event received"); match demise { Demise::Interrupted => { tracing::info!("Service application was interrupted"); } Demise::Terminated => { tracing::info!("Service application was terminated"); } Demise::ReachedEnd => { tracing::info!("Service application reached its end"); } } // Once a Shutdown event has been 
received, signal to the main service // callback to terminate. tx.send(()).unwrap(); } SvcEvt::User(us) => match us { UserSig::Sig1 => { log::info!("User signal 1"); } UserSig::Sig2 => { log::info!("User signal 2"); } }, _ => {} }; // Set up main service runtime handler let svcrt = MyService { rx_term: rx }; // Define service application runtime components let apprt = SrvAppRt::Sync { svcevt_handler: Box::new(svcevt_handler), rt_handler: Box::new(svcrt) }; // Launch service let rctx = RunCtx::new(&svcname); rctx.run(apprt).unwrap(); } } fn main() { #[cfg(feature = "rt")] gate::main(); #[cfg(not(feature = "rt"))] println!("simplesync example requires the rt feature"); } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Added examples/simpletokio.rs.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | #[cfg(all(feature = "rt", feature = "tokio"))] mod gate { use std::time::Duration; use tokio::{sync::mpsc, time}; use qsu::{ async_trait, rt::{ InitCtx, RunCtx, RunEnv, SrvAppRt, SvcEvt, TermCtx, TokioServiceHandler, UserSig }, tracing }; #[derive(Debug)] enum MyError {} struct MyService { rx_term: mpsc::UnboundedReceiver<()> } #[async_trait] impl TokioServiceHandler for MyService { type AppErr = MyError; async fn init(&mut self, ictx: InitCtx) -> Result<(), Self::AppErr> { tracing::info!("Running TokioServiceHandler::init()"); ictx.report(Some("Initializing service handler".into())); // .. do initialization here .. ictx.report(Some("Finalized service handler initialization".into())); Ok(()) } async fn run(&mut self, _re: &RunEnv) -> Result<(), Self::AppErr> { tracing::info!("Running TokioServiceHandler::run()"); tracing::info!("Wait for termination event for up to 4 seconds .."); tokio::select! { _ = self.rx_term.recv() => { tracing::info!("Got kill signal from svc"); } () = time::sleep(Duration::from_secs(4)) => { tracing::info!("Got timeout"); } } Ok(()) } async fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr> { tracing::info!("Running TokioServiceHandler::shutdown()"); tctx.report(Some("Terminating service handler".into())); // .. do termination here .. 
tctx.report(Some("Finalized service handler termination".into())); Ok(()) } } pub fn main() { let svcname = qsu::default_service_name().unwrap(); // Channel used to signal termination from service event handler to main // application let (tx, rx) = mpsc::unbounded_channel::<()>(); // Service event handler let svcevt_handler = move |msg| match msg { SvcEvt::Shutdown(_) => { tracing::debug!("Shutdown event from service subsystem"); // Once a Shutdown or Termination event has been received, signal to // the main service callback to terminate. let _ = tx.send(()); } SvcEvt::User(us) => match us { UserSig::Sig1 => { log::info!("User signal 1"); } UserSig::Sig2 => { log::info!("User signal 2"); } }, _ => {} }; // Set up main service runtime handler let svcrt = MyService { rx_term: rx }; // Define service application runtime components let apprt = SrvAppRt::Tokio { rtbldr: None, svcevt_handler: Box::new(svcevt_handler), rt_handler: Box::new(svcrt) }; // Launch service let rctx = RunCtx::new(&svcname); rctx.run(apprt).unwrap(); } } fn main() { #[cfg(all(feature = "rt", feature = "tokio"))] gate::main(); #[cfg(not(all(feature = "rt", feature = "tokio")))] println!("simpletokio example requires the rt and tokio features"); } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
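Both new examples implement the same three-phase handler lifecycle: `init()` and `shutdown()` report progress, and the runtime aborts the sequence on the first error. The control flow the runtime imposes can be sketched with a plain trait; the names below are illustrative stand-ins, not qsu's actual types:

```rust
/// Minimal stand-in for the service-handler lifecycle: the runtime
/// drives init(), then run(), then shutdown(), stopping at the first error.
trait Handler {
    type Err;
    fn init(&mut self, report: &mut Vec<String>) -> Result<(), Self::Err>;
    fn run(&mut self) -> Result<(), Self::Err>;
    fn shutdown(&mut self, report: &mut Vec<String>) -> Result<(), Self::Err>;
}

/// Drive a handler through its whole lifecycle, returning the collected
/// status reports on success.
fn drive<H: Handler>(h: &mut H) -> Result<Vec<String>, H::Err> {
    let mut reports = Vec::new();
    h.init(&mut reports)?;
    h.run()?;
    h.shutdown(&mut reports)?;
    Ok(reports)
}

struct Demo;

impl Handler for Demo {
    type Err = ();
    fn init(&mut self, report: &mut Vec<String>) -> Result<(), ()> {
        report.push("initializing".into());
        Ok(())
    }
    fn run(&mut self) -> Result<(), ()> {
        Ok(())
    }
    fn shutdown(&mut self, report: &mut Vec<String>) -> Result<(), ()> {
        report.push("terminating".into());
        Ok(())
    }
}
```

In qsu the reports go to the service subsystem (e.g. Windows SCM start/stop progress) via the `InitCtx`/`TermCtx` arguments rather than a vector; the vector here just makes the ordering observable.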
Changes to src/argp.rs.
︙ | ︙ | |||
28 29 30 31 32 33 34 | //! application. It is possible to register custom subcommands, and the //! command line parser specification can be modified by the [`ArgsProc`] //! callbacks. use clap::{builder::Str, Arg, ArgAction, ArgMatches, Args, Command}; use crate::{ | | | > | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | //! application. It is possible to register custom subcommands, and the //! command line parser specification can be modified by the [`ArgsProc`] //! callbacks. use clap::{builder::Str, Arg, ArgAction, ArgMatches, Args, Command}; use crate::{ err::CbErr, installer::{self, RegSvc}, lumberjack::LogLevel, rt::{RunCtx, SrvAppRt} }; /// Modify a `clap` [`Command`] instance to accept common service management /// subcommands. /// /// If `inst_subcmd` is `Some()`, it should be the name of the subcommand used /// to register a service. If `rm_subcmd` is `Some()` it should be the name of /// the subcommand used to deregister a service. Similarly `run_subcmd` is /// used to add a subcommand for running the service under a service subsystem /// [where applicable]. /// /// It is recommended that applications use the higher-level `ArgParser` /// instead. #[must_use] pub fn add_subcommands( cli: Command, svcname: &str, inst_subcmd: Option<&str>, rm_subcmd: Option<&str>, run_subcmd: Option<&str> ) -> Command { |
︙ | ︙ | |||
107 108 109 110 111 112 113 | /// Set an optional directory the service runtime should start in. #[arg(short, long, value_name = "DIR")] workdir: Option<String>, #[arg(long, value_enum, value_name = "LEVEL")] log_level: Option<LogLevel>, | | | | > > > > > > > > > > | 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 | /// Set an optional directory the service runtime should start in. #[arg(short, long, value_name = "DIR")] workdir: Option<String>, #[arg(long, value_enum, value_name = "LEVEL")] log_level: Option<LogLevel>, #[arg(long, hide(true), value_name = "FILTER")] trace_filter: Option<String>, #[arg(long, value_enum, hide(true), value_name = "FNAME")] trace_file: Option<String>, /// Forcibly install service. /// /// On Windows, attempt to stop and uninstall service if it already exists. /// /// On macos/launchd and linux/systemd, overwrite the service specification /// file if it already exists. #[arg(long, short)] force: bool } /// Create a `clap` [`Command`] object that accepts service registration /// arguments. /// /// It is recommended that applications use the higher-level `ArgParser` /// instead, but this call exists in case applications need finer grained /// control. #[must_use] pub fn mk_inst_cmd(cmd: &str, svcname: &str) -> Command { let namearg = Arg::new("svcname") .short('n') .long("name") .action(ArgAction::Set) .value_name("SVCNAME") .default_value(Str::from(svcname.to_string())) |
︙ | ︙ | |||
146 147 148 149 150 151 152 153 154 155 156 157 158 159 | /// Create a `clap` [`Command`] object that accepts service deregistration /// arguments. /// /// It is recommended that applications use the higher-level `ArgParser` /// instead, but this call exists in case applications need finer grained /// control. pub fn mk_rm_cmd(cmd: &str, svcname: &str) -> Command { let namearg = Arg::new("svcname") .short('n') .long("name") .action(ArgAction::Set) .value_name("SVCNAME") .default_value(svcname.to_string()) | > | 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 | /// Create a `clap` [`Command`] object that accepts service deregistration /// arguments. /// /// It is recommended that applications use the higher-level `ArgParser` /// instead, but this call exists in case applications need finer grained /// control. #[must_use] pub fn mk_rm_cmd(cmd: &str, svcname: &str) -> Command { let namearg = Arg::new("svcname") .short('n') .long("name") .action(ArgAction::Set) .value_name("SVCNAME") .default_value(svcname.to_string()) |
︙ | ︙ | |||
167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 | /// Parsed service deregistration arguments. pub struct DeregSvc { pub svcname: String } impl DeregSvc { pub fn from_cmd_match(matches: &ArgMatches) -> Self { let svcname = matches.get_one::<String>("svcname").unwrap().to_owned(); Self { svcname } } } /// Run service. #[derive(Debug, Args)] struct RunSvcArgs {} /// Create a `clap` [`Command`] object that accepts service running arguments. /// /// It is recommended that applications use the higher-level `ArgParser` /// instead, but this call exists in case applications need finer grained /// control. pub fn mk_run_cmd(cmd: &str, svcname: &str) -> Command { let namearg = Arg::new("svcname") .short('n') .long("name") .action(ArgAction::Set) .value_name("SVCNAME") .default_value(svcname.to_string()) .help("Service name"); let cli = Command::new(cmd.to_string()).arg(namearg); RunSvcArgs::augment_args(cli) } | > > > > > > > | | > > > > > > > > > > > | > > > > > > > > > > | > > > > > > | > > > > | > > > > | > > > > | | | > | > > > | | > | | > > > > | | < < < | < > > | > > > > > > | | > > > > > > > > > > > > > > | > | > > > | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 
351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 | /// Parsed service deregistration arguments. pub struct DeregSvc { pub svcname: String } impl DeregSvc { #[allow( clippy::missing_panics_doc, reason = "svcname should always have been set" )] #[must_use] pub fn from_cmd_match(matches: &ArgMatches) -> Self { // unwrap() should be okay here, because svcname should always be set. let svcname = matches.get_one::<String>("svcname").unwrap().to_owned(); Self { svcname } } } /// Run service. #[derive(Debug, Args)] struct RunSvcArgs {} /// Create a `clap` [`Command`] object that accepts service running arguments. /// /// It is recommended that applications use the higher-level `ArgParser` /// instead, but this call exists in case applications need finer grained /// control. 
#[must_use] pub fn mk_run_cmd(cmd: &str, svcname: &str) -> Command { let namearg = Arg::new("svcname") .short('n') .long("name") .action(ArgAction::Set) .value_name("SVCNAME") .default_value(svcname.to_string()) .help("Service name"); let cli = Command::new(cmd.to_string()).arg(namearg); RunSvcArgs::augment_args(cli) } pub(crate) enum ArgpRes<'cb, ApEr> { /// Run server application. RunApp(RunCtx, &'cb mut dyn ArgsProc<AppErr = ApEr>), /// Nothing to do (service was probably registered/deregistered). Quit } /// Parsed service running arguments. pub struct RunSvc { pub svcname: String } impl RunSvc { #[allow( clippy::missing_panics_doc, reason = "svcname should always have been set" )] #[must_use] pub fn from_cmd_match(matches: &ArgMatches) -> Self { let svcname = matches.get_one::<String>("svcname").unwrap().to_owned(); Self { svcname } } } /// Used to differentiate between running without a subcommand, or the /// install/uninstall or run service subcommands. pub enum Cmd { Root, Inst, Rm, Run } /// Allow application to customise behavior of an [`ArgParser`] instance. pub trait ArgsProc { type AppErr; /// Give the application an opportunity to modify the root and subcommand /// `Command`s. /// /// `cmdtype` indicates whether `cmd` is the root `Command` or one of the /// subcommand `Command`s. /// /// # Errors /// Application-defined error will be returned as `CbErr::App` to the /// original caller. #[allow(unused_variables)] fn conf_cmd( &mut self, cmdtype: Cmd, cmd: Command ) -> Result<Command, Self::AppErr> { Ok(cmd) } /// Callback allowing application to configure the service registration /// context just before the service is registered. /// /// This trait method can, among other things, be used by an application to: /// - Configure a service work directory. /// - Add environment variables. /// - Add command-line arguments to the run command. /// /// The `sub_m` argument represents `clap`'s parsed subcommand context for /// the service registration subcommand.
Applications that want to add /// custom arguments to the parser should implement the /// [`ArgsProc::conf_cmd()`] trait method and perform the subcommand /// augmentation there. /// /// This callback is called after the system-defined command line arguments /// have been parsed and populated into `regsvc`. Implementations should be /// careful not to overwrite settings that should be configurable via the /// command line, and can inspect the current value of fields before setting /// them if conditional reconfigurations are required. /// /// The default implementation does nothing but return `regsvc` unmodified. /// /// # Errors /// Application-defined error will be returned as `CbErr::App` to the /// original caller. #[allow(unused_variables)] fn proc_inst( &mut self, sub_m: &ArgMatches, regsvc: RegSvc ) -> Result<RegSvc, Self::AppErr> { Ok(regsvc) } /// Callback allowing application to configure the service deregistration /// context just before the service is deregistered. /// /// # Errors /// Application-defined error will be returned as `CbErr::App` to the /// original caller. #[allow(unused_variables)] fn proc_rm( &mut self, sub_m: &ArgMatches, deregsvc: DeregSvc ) -> Result<DeregSvc, Self::AppErr> { Ok(deregsvc) } /// Callback allowing application to configure the run context before /// launching the server application. /// /// qsu will have performed all its own initialization of the [`RunCtx`] /// before calling this function. /// /// The application can differentiate between running in service mode and /// running in the foreground by calling [`RunCtx::is_service()`]. /// /// # Errors /// Application-defined error will be returned as `CbErr::App` to the /// original caller. #[allow(unused_variables)] fn proc_run( &mut self, matches: &ArgMatches, runctx: RunCtx ) -> Result<RunCtx, Self::AppErr> { Ok(runctx) } /// Called when a subcommand is encountered that is not one of the three /// subcommands recognized by qsu.
/// /// # Errors /// Application-defined error will be returned as `CbErr::App` to the /// original caller. #[allow(unused_variables)] fn proc_other( &mut self, subcmd: &str, sub_m: &ArgMatches ) -> Result<(), Self::AppErr> { Ok(()) } /// Construct a server application runtime. /// /// # Errors /// Application-defined error will be returned as `CbErr::App` to the /// original caller. fn build_apprt(&mut self) -> Result<SrvAppRt<Self::AppErr>, Self::AppErr>; } /// High-level argument parser. /// /// This is suitable for applications that follow a specific pattern: /// - It has subcommands for: /// - Registering a service /// - Deregistering a service /// - Running as a service /// - Running without any subcommands should run the server application as a /// foreground process. pub struct ArgParser<'cb, ApEr> { svcname: String, reg_subcmd: String, dereg_subcmd: String, run_subcmd: String, cli: Command, cb: &'cb mut dyn ArgsProc<AppErr = ApEr>, regcb: Option<Box<dyn FnOnce(RegSvc) -> RegSvc>> } impl<'cb, ApEr> ArgParser<'cb, ApEr> where ApEr: Send + 'static { /// Create a new argument parser. /// /// `svcname` is the _default_ service name. It may be overridden using /// command line arguments. pub fn new(svcname: &str, cb: &'cb mut dyn ArgsProc<AppErr = ApEr>) -> Self { let cli = Command::new(""); Self { svcname: svcname.to_string(), reg_subcmd: "register-service".into(), dereg_subcmd: "deregister-service".into(), run_subcmd: "run-service".into(), cli, cb, regcb: None } } /// Create a new argument parser, basing the root command parser on an /// application-supplied `Command`. /// /// `svcname` is the _default_ service name. It may be overridden using /// command line arguments.
pub fn with_cmd( svcname: &str, cli: Command, cb: &'cb mut dyn ArgsProc<AppErr = ApEr> ) -> Self { Self { svcname: svcname.to_string(), reg_subcmd: "register-service".into(), dereg_subcmd: "deregister-service".into(), run_subcmd: "run-service".into(), cli, cb, regcb: None } } /// Rename the service registration subcommand. #[must_use] pub fn reg_subcmd(mut self, nm: &str) -> Self { self.reg_subcmd = nm.to_string(); self } /// Rename the service deregistration subcommand. #[must_use] pub fn dereg_subcmd(mut self, nm: &str) -> Self { self.dereg_subcmd = nm.to_string(); self } /// Rename the subcommand for running the service. #[must_use] pub fn run_subcmd(mut self, nm: &str) -> Self { self.run_subcmd = nm.to_string(); self } fn inner_proc(self) -> Result<ArgpRes<'cb, ApEr>, CbErr<ApEr>> { let matches = match self.cli.try_get_matches() { Ok(m) => m, Err(e) => match e.kind() { clap::error::ErrorKind::DisplayHelp | clap::error::ErrorKind::DisplayVersion => { e.exit(); } _ => { // ToDo: Convert error to Error::ArgP, pass along the error type so // that the Termination handler can output the specific error. //Err(e)?; e.exit(); } } }; match matches.subcommand() { Some((subcmd, sub_m)) if subcmd == self.reg_subcmd => { //println!("{:#?}", sub_m); // Convert known register-service command line arguments to a RegSvc // context. let mut regsvc = RegSvc::from_cmd_match(sub_m); // To trigger the server to run in service mode, run with the // subcommand "run-service" (or whatever it has been changed to). // // We're making a pretty safe assumption that the service will // be run through qsu's argument processor because it is being // registered using it. // // If the service name is different than the name derived from the // executable's name, then add "--name <svcname>" arguments.
let mut args = vec![String::from(&self.run_subcmd)]; if regsvc.svcname() != self.svcname { args.push(String::from("--name")); args.push(regsvc.svcname().to_string()); } regsvc.args_ref(args); // Call application callback, to allow application-specific // service configuration. // // This is a good place to stick custom environment, arguments, // registry changes that may depend on command line arguments. let regsvc = self .cb .proc_inst(sub_m, regsvc) .map_err(|ae| CbErr::App(ae))?; // Give the application a final chance to modify the service // registration parameters. // // This can be used to set hardcoded parameters like service display // name, description, etc. let regsvc = if let Some(cb) = self.regcb { cb(regsvc) } else { regsvc }; // Register the service with the operating system's service subsystem. regsvc.register()?; Ok(ArgpRes::Quit) } Some((subcmd, sub_m)) if subcmd == self.dereg_subcmd => { // Get arguments relating to service deregistration. let args = DeregSvc::from_cmd_match(sub_m); let args = self.cb.proc_rm(sub_m, args).map_err(|ae| CbErr::App(ae))?; installer::uninstall(&args.svcname)?; Ok(ArgpRes::Quit) } Some((subcmd, sub_m)) if subcmd == self.run_subcmd => { // Get arguments relating to running the service. let args = RunSvc::from_cmd_match(sub_m); // Create a RunCtx, mark it as a service, and allow application the // opportunity to modify it based on the parsed command line. let rctx = RunCtx::new(&args.svcname).service(); let rctx = self.cb.proc_run(sub_m, rctx).map_err(|ae| CbErr::App(ae))?; // Return a run context for a background service process. Ok(ArgpRes::RunApp(rctx, self.cb)) } Some((subcmd, sub_m)) => { // Call application callback for processing "other" subcmd self .cb .proc_other(subcmd, sub_m) .map_err(|ae| CbErr::App(ae))?; // The subcommand was handled by the application; nothing to run.
Ok(ArgpRes::Quit) } _ => { // Create a RunCtx (not marked as a service) and allow the application // the opportunity to modify it based on the parsed command line. let rctx = RunCtx::new(&self.svcname); let rctx = self .cb .proc_run(&matches, rctx) .map_err(|ae| CbErr::App(ae))?; // Return a run context for a foreground process. Ok(ArgpRes::RunApp(rctx, self.cb)) } } } /// Register a closure that will be called before service registration. /// /// This callback serves a different purpose than implementing the /// [`ArgsProc::proc_inst()`] method, which is primarily meant to translate /// command line arguments to service registration parameters. /// /// The `regsvc_proc()` callback on the other hand is called just before /// actual registration and is intended to overwrite service registration /// parameters. It is suitable to use this callback to set hard-coded /// application-specific service parameters, like service display name, /// description, dependencies, and other parameters that the user should /// not need to specify manually. /// /// Just as with `ArgsProc::proc_inst()` this callback is called after the /// system-defined command line arguments have been parsed and populated /// into the [`RegSvc`] context. Closures should be careful not to overwrite /// settings that should be configurable via the command line, and can /// inspect the current value of fields before setting them if conditional /// reconfigurations are required. #[must_use] pub fn regsvc_proc( mut self, f: impl FnOnce(RegSvc) -> RegSvc + 'static ) -> Self { self.regcb = Some(Box::new(f)); self } /// Process command line arguments. /// /// Calling this method will initialize the command line parser, parse the /// command line, using the associated [`ArgsProc`] as appropriate to modify /// the argument parser, and then take the appropriate action: ///
︙ | ︙ | |||
505 506 507 508 509 510 511 | /// same one that will run the service. /// - Add the "run service" subcommand to the service's command line /// arguments. /// - If the specified service name is different than the default service /// name (determined by /// [`default_service_name()`](crate::default_service_name), then the /// aguments `--name <service name>` will be added. | > > > > | > > | > > > | > | > > > | > > > | | 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 | /// same one that will run the service. /// - Add the "run service" subcommand to the service's command line /// arguments. /// - If the specified service name is different than the default service /// name (determined by /// [`default_service_name()`](crate::default_service_name), then the /// aguments `--name <service name>` will be added. /// /// # Errors /// Application-defined error will be returned as `CbErr::Aop` to the /// original caller. 
pub fn proc(mut self) -> Result<(), CbErr<ApEr>> { // Give application the opportunity to modify root Command self.cli = self .cb .conf_cmd(Cmd::Root, self.cli) .map_err(|ae| CbErr::App(ae))?; // Create registration subcommand and give application the opportunity to // modify the subcommand's Command let sub = mk_inst_cmd(&self.reg_subcmd, &self.svcname); let sub = self .cb .conf_cmd(Cmd::Inst, sub) .map_err(|ae| CbErr::App(ae))?; self.cli = self.cli.subcommand(sub); // Create deregistration subcommand let sub = mk_rm_cmd(&self.dereg_subcmd, &self.svcname); let sub = self .cb .conf_cmd(Cmd::Rm, sub) .map_err(|ae| CbErr::App(ae))?; self.cli = self.cli.subcommand(sub); // Create run subcommand let sub = mk_run_cmd(&self.run_subcmd, &self.svcname); let sub = self .cb .conf_cmd(Cmd::Run, sub) .map_err(|ae| CbErr::App(ae))?; self.cli = self.cli.subcommand(sub); // Parse command line arguments. Run the service application if requested // to do so. if let ArgpRes::RunApp(runctx, cb) = self.inner_proc()? { // Argument parser asked us to run, so call the application to ask it to // create the service handler, and then kick off the service runtime. //let st = bldr(ctx)?; let st = cb.build_apprt().map_err(|ae| CbErr::App(ae))?; runctx.run(st)?; } Ok(()) } } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
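`inner_proc()` above routes on the (possibly renamed) subcommand: register, deregister, run under the service subsystem, an application-defined subcommand, or foreground operation when no subcommand is given. That routing can be sketched std-only, using the default subcommand names from `ArgParser::new()` (the `Dispatch` enum and `dispatch` function are illustrative, not qsu types):

```rust
/// Outcome of qsu-style argument dispatch (illustrative names).
#[derive(Debug, PartialEq)]
enum Dispatch {
    Register,
    Deregister,
    RunService,
    /// An application-defined subcommand, handed to `ArgsProc::proc_other()`.
    Other(String),
    /// No subcommand: run the server application in the foreground.
    Foreground,
}

/// Mirror the routing in `ArgParser::inner_proc()`: match the first
/// argument against the three well-known subcommand names, fall through
/// to Other for anything else, and Foreground when nothing is given.
fn dispatch(args: &[&str]) -> Dispatch {
    match args.first() {
        Some(&"register-service") => Dispatch::Register,
        Some(&"deregister-service") => Dispatch::Deregister,
        Some(&"run-service") => Dispatch::RunService,
        Some(other) => Dispatch::Other((*other).to_string()),
        None => Dispatch::Foreground,
    }
}
```

In the real parser the names are fields (`reg_subcmd` etc.) so they can be renamed via the builder methods, and only the run-service and foreground paths produce a `RunCtx`; the other arms finish with `ArgpRes::Quit`.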
Changes to src/err.rs.
1 2 | use std::{fmt, io}; | < < < | | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > < < < < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 | use std::{fmt, io}; #[derive(Debug)] pub enum ArgsError { #[cfg(feature = "clap")] Clap(clap::Error), Msg(String) } /// Indicate failure for server applicatino callbacks. #[derive(Debug, Default)] pub struct AppErrors<ApEr> { pub init: Option<ApEr>, pub run: Option<ApEr>, pub shutdown: Option<ApEr> } impl<ApEr> AppErrors<ApEr> { pub const fn init_failed(&self) -> bool { self.init.is_some() } pub const fn run_failed(&self) -> bool { self.run.is_some() } pub const fn shutdown_failed(&self) -> bool { self.shutdown.is_some() } } /// Errors that can be returned from functions that call application callbacks. #[derive(Debug)] #[allow(clippy::module_name_repetitions)] pub enum CbErr<ApEr> { /// An qsu library error was generated. Lib(Error), /// Application-defined error. /// /// Applications can use this variant to pass application-specific errors /// through the runtime back to itself. App(ApEr), /// Returned by [`RunCtx::run()`](crate::rt::RunCtx) to indicate which /// server application callbacks that failed. 
#[cfg(feature = "rt")] #[cfg_attr(docsrs, doc(cfg(feature = "rt")))] SrvApp(AppErrors<ApEr>) } impl<ApEr: fmt::Debug> std::error::Error for CbErr<ApEr> {} impl<ApEr> fmt::Display for CbErr<ApEr> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Self::Lib(e) => e.fmt(f), Self::App(_ae) => { // ToDo: Add ApErr: Error bound and forward the call to it write!(f, "Application-defined error") } #[cfg(feature = "rt")] Self::SrvApp(e) => { let mut v = vec![]; if e.init.is_some() { v.push("init"); } if e.run.is_some() { v.push("run"); } if e.shutdown.is_some() { v.push("shutdown"); } write!(f, "Server application failed [{}]", v.join(",")) } } } } impl<ApEr> CbErr<ApEr> { /// Extract application-specific error. /// /// # Panics /// The error must be [`CbErr::App`] or this method will panic. pub fn unwrap_apperr(self) -> ApEr { let Self::App(ae) = self else { panic!("Not CbErr::App()"); }; ae } } impl<ApEr> From<Error> for CbErr<ApEr> { fn from(err: Error) -> Self { Self::Lib(err) } } #[cfg(feature = "rocket")] impl<ApEr> From<rocket::Error> for CbErr<ApEr> { fn from(err: rocket::Error) -> Self { Self::Lib(Error::Rocket(err.to_string())) } } impl<ApEr> From<std::io::Error> for CbErr<ApEr> { fn from(err: std::io::Error) -> Self { Self::Lib(Error::IO(err.to_string())) } } /// Errors that qsu will return to the application. #[derive(Debug)] pub enum Error { ArgP(ArgsError), BadFormat(String), Internal(String), IO(String), /// An error related to logging occurred. ///
︙ | ︙ | |||
60 61 62 63 64 65 66 | /// Rocket-specific errors. #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] Rocket(String), SubSystem(String), | < < < < < < > < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | | | | | | | | | > < < < | | < | < < | < < | < < | < < | < < | < < | < < < < < < < < < < < < < < | | | | | | | | | > > | > | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | /// Rocket-specific errors. #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] Rocket(String), SubSystem(String), Unsupported } #[allow(clippy::needless_pass_by_value)] impl Error { pub fn bad_format(s: impl ToString) -> Self { Self::BadFormat(s.to_string()) } pub fn internal(s: impl ToString) -> Self { Self::Internal(s.to_string()) } pub fn io(s: impl ToString) -> Self { Self::IO(s.to_string()) } pub fn lumberjack(s: impl ToString) -> Self { Self::LumberJack(s.to_string()) } pub fn missing(s: impl ToString) -> Self { Self::Missing(s.to_string()) } } impl std::error::Error for Error {} impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Self::ArgP(s) => { // ToDo: Handle the ArgsError::Clap and ArgsError::Msg differently write!(f, "Argument parser; {s:?}") } Self::BadFormat(s) => write!(f, "Bad format error; {s}"), Self::Internal(s) => write!(f, "Internal error; {s}"), Self::IO(s) => write!(f, "I/O error; {s}"), Self::LumberJack(s) => write!(f, "LumberJack error; {s}"), Self::Missing(s) => write!(f, "Missing 
data; {s}"), #[cfg(feature = "rocket")] Self::Rocket(s) => write!(f, "Rocket error; {s}"), Self::SubSystem(s) => write!(f, "Service subsystem error; {s}"), Self::Unsupported => { write!(f, "Operation is unsupported [on this platform]") } } } } /* #[cfg(feature = "clap")] impl From<clap::error::Error> for Error { fn from(err: clap::error::Error) -> Self { Error::ArgP(err.to_string()) } } */ #[cfg(windows)] impl From<eventlog::InitError> for Error { /// Map eventlog initialization errors to `Error::LumberJack`. fn from(err: eventlog::InitError) -> Self { Self::LumberJack(err.to_string()) } } #[cfg(windows)] impl From<eventlog::Error> for Error { /// Map eventlog errors to `Error::LumberJack`. fn from(err: eventlog::Error) -> Self { Self::LumberJack(err.to_string()) } } impl From<io::Error> for Error { fn from(err: io::Error) -> Self { Self::IO(err.to_string()) } } #[cfg(windows)] impl From<registry::key::Error> for Error { fn from(err: registry::key::Error) -> Self { Self::SubSystem(err.to_string()) } } #[cfg(feature = "rocket")] impl From<rocket::Error> for Error { fn from(err: rocket::Error) -> Self { Self::Rocket(err.to_string()) } } #[cfg(feature = "installer")] impl From<sidoc::Error> for Error { fn from(err: sidoc::Error) -> Self { Self::Internal(err.to_string()) } } #[cfg(windows)] impl From<windows_service::Error> for Error { fn from(err: windows_service::Error) -> Self { Self::SubSystem(err.to_string()) } } /* impl<ApEr> From<ApEr> for Error<ApEr> { /// Wrap an [`AppErr`] in an [`Error`]. fn from(err: ApEr) -> Self { Error::App(err) } } */ // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
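The `CbErr<ApEr>` shape introduced in this hunk, a single generic enum carrying either a library error or an opaque application error, plus `From` impls for common error sources, can be reproduced in miniature. This is a simplified stand-in for illustration, not the real qsu type:

```rust
use std::fmt;

/// Sketch of the CbErr<ApEr> pattern: either a library-side error or an
/// application-defined error threaded through the runtime unchanged.
#[derive(Debug)]
enum CbErr<ApEr> {
    Lib(String),
    App(ApEr),
}

// Display only requires Debug on the application error, matching the
// real type's loose bound (it does not force ApEr: Error).
impl<ApEr: fmt::Debug> fmt::Display for CbErr<ApEr> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Self::Lib(s) => write!(f, "library error; {s}"),
            Self::App(e) => write!(f, "application-defined error; {e:?}"),
        }
    }
}

// From impls let `?` lift common error sources into the Lib variant.
impl<ApEr> From<std::io::Error> for CbErr<ApEr> {
    fn from(err: std::io::Error) -> Self {
        Self::Lib(err.to_string())
    }
}
```

The key design point mirrored here is that the application's error type flows through the runtime as an opaque generic, so the application can recover it intact on the other side (the real type exposes `unwrap_apperr()` for that).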
Changes to src/installer.rs.
︙ | ︙ | |||
17 18 19 20 21 22 23 24 25 26 27 28 29 30 | #[cfg(feature = "clap")] use clap::ArgMatches; use itertools::Itertools; use crate::{err::Error, lumberjack::LogLevel}; /* #[cfg(any( target_os = "macos", all(target_os = "linux", feature = "systemd") ))] pub enum InstallDir { | > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 | #[cfg(feature = "clap")] use clap::ArgMatches; use itertools::Itertools; use crate::{err::Error, lumberjack::LogLevel}; /* #[cfg(any( target_os = "macos", all(target_os = "linux", feature = "systemd") ))] pub enum InstallDir { |
︙ | ︙ | |||
#[cfg(any(
  target_os = "macos",
  all(target_os = "linux", feature = "systemd")
))]
umask: Option<String>
}

#[cfg(windows)]
pub type BoxRegCb =
  Box<dyn FnOnce(&str, &mut winreg::RegKey) -> Result<(), Error>>;

#[allow(clippy::struct_excessive_bools)]
pub struct RegSvc {
  /// If `true`, then attempt to forcibly install service.
  pub force: bool,

  /// Set to `true` if this service uses the qsu service argument parser.
  ///
  /// This will ensure that `run-service` is the first argument passed to the
  /// service executable.
  pub qsu_argp: bool,

  pub svcname: String,

  /// Service's display name.
  ///
  /// Only used on Windows.
  pub display_name: Option<String>,

  /// Service's description.
  ///
  /// Only used on Windows and on linux/systemd.
  pub description: Option<String>,

  /// Set to `true` if service supports configuration reloading.
  pub conf_reload: bool,

  /// Set to `true` if this is a network service.
  ///
  /// Note that this does not magically solve startup dependencies.
  pub netservice: bool,

  #[cfg(windows)]
  pub regconf: Option<BoxRegCb>,

  /// Command line arguments.
  pub args: Vec<String>,

  /// Environment variables.
  pub envs: Vec<(String, String)>,

  /// Set service to auto-start.
  ///
  /// By default the service will be registered, but needs to be started
  /// manually.
  pub autostart: bool,

  pub(crate) workdir: Option<String>,

  /// List of service dependencies.
  deps: Vec<Depend>,

  log_level: Option<LogLevel>,
  trace_filter: Option<String>,
  trace_file: Option<String>,

  runas: RunAs
}

pub enum Depend {
  Network,
  Custom(Vec<String>)
}

impl RegSvc {
  #[must_use]
  pub fn new(svcname: &str) -> Self {
    Self {
      force: false,
      qsu_argp: false,
      svcname: svcname.to_string(),
      display_name: None,
      description: None,
      conf_reload: false,
      netservice: false,
      #[cfg(windows)]
      regconf: None,
      args: Vec::new(),
      envs: Vec::new(),
      autostart: false,
      workdir: None,
      deps: Vec::new(),
      log_level: None,
      trace_filter: None,
      trace_file: None,
      runas: RunAs::default()
    }
  }

  #[cfg(feature = "clap")]
  #[allow(clippy::missing_panics_doc)]
  pub fn from_cmd_match(matches: &ArgMatches) -> Self {
    let force = matches.get_flag("force");

    // unwrap should be okay, because svcname is mandatory
    let svcname = matches.get_one::<String>("svcname").unwrap().to_owned();
    let autostart = matches.get_flag("auto_start");
    let dispname = matches.get_one::<String>("display_name");
    let descr = matches.get_one::<String>("description");
    let args: Vec<String> = matches
      .get_many::<String>("arg")
      .map_or_else(Vec::new, |vr| vr.map(String::from).collect());
    let envs: Vec<String> = matches
      .get_many::<String>("env")
      .map_or_else(Vec::new, |vr| vr.map(String::from).collect());
    /*
    if let Some(vr) = matches.get_many::<String>("env") {
      vr.map(String::from).collect()
    } else {
      Vec::new()
    };
    */
    let workdir = matches.get_one::<String>("workdir");

    let mut environ = Vec::new();
    let mut it = envs.into_iter();
    while let Some((key, value)) = it.next_tuple() {
      environ.push((key, value));
    }

    let log_level = matches.get_one::<LogLevel>("log_level").copied();
    let trace_filter = matches.get_one::<String>("trace_filter").cloned();
    let trace_file = matches.get_one::<String>("trace_file").cloned();

    let runas = RunAs::default();

    Self {
      force,
      qsu_argp: true,
      svcname,
      display_name: dispname.cloned(),
      description: descr.cloned(),
      conf_reload: false,
      netservice: false,
      #[cfg(windows)]
      regconf: None,
      args,
      envs: environ,
      autostart,
      workdir: workdir.cloned(),
      deps: Vec::new(),
      log_level,
      trace_filter,
      trace_file,
      runas
    }
  }

  #[must_use]
  pub fn svcname(&self) -> &str {
    &self.svcname
  }

  /// Set the service's display name.
  ///
  /// This only has an effect on Windows.
  #[must_use]
  pub fn display_name(mut self, name: impl ToString) -> Self {
    self.display_name_ref(name);
    self
  }

  /// Set the service's _display name_.
  ///
  /// This only has an effect on Windows.
  #[allow(clippy::needless_pass_by_value)]
  pub fn display_name_ref(&mut self, name: impl ToString) -> &mut Self {
    self.display_name = Some(name.to_string());
    self
  }

  /// Set the service's description.
  ///
  /// This only has an effect on Windows and linux/systemd.
  #[must_use]
  pub fn description(mut self, text: impl ToString) -> Self {
    self.description_ref(text);
    self
  }

  /// Set the service's description.
  ///
  /// This only has an effect on Windows and linux/systemd.
  #[allow(clippy::needless_pass_by_value)]
  pub fn description_ref(&mut self, text: impl ToString) -> &mut Self {
    self.description = Some(text.to_string());
    self
  }

  /// Mark service as able to live reload its configuration.
  #[must_use]
  pub fn conf_reload(mut self) -> Self {
    self.conf_reload_ref();
    self
  }

  /// Mark service as able to live reload its configuration.
  pub fn conf_reload_ref(&mut self) -> &mut Self {
    self.conf_reload = true;
    self
  }

  /// Mark service as a network application.
  ///
  /// # Windows
  /// Calling this will implicitly add a `Tcpip` service dependency.
  #[must_use]
  pub fn netservice(mut self) -> Self {
    self.netservice_ref();
    self
  }

  /// Mark service as a network application.
  ///
  /// # Windows
  /// Calling this will implicitly add a `Tcpip` service dependency.
  pub fn netservice_ref(&mut self) -> &mut Self {
    self.netservice = true;
    #[cfg(windows)]
    self.deps.push(Depend::Network);
    self
  }

  /// Register a callback that will be used to set service registry keys.
  #[cfg(windows)]
  #[cfg_attr(docsrs, doc(cfg(windows)))]
  #[must_use]
  pub fn regconf<F>(mut self, f: F) -> Self
  where
    F: FnOnce(&str, &mut winreg::RegKey) -> Result<(), Error> + 'static
  {
    self.regconf = Some(Box::new(f));
    self
  }

  /// Register a callback that will be used to set service registry keys.
  #[cfg(windows)]
  #[cfg_attr(docsrs, doc(cfg(windows)))]
  pub fn regconf_ref<F>(&mut self, f: F) -> &mut Self
  where
    F: FnOnce(&str, &mut winreg::RegKey) -> Result<(), Error> + 'static
  {
    self.regconf = Some(Box::new(f));
    self
  }

  /// Append a service command line argument.
  #[allow(clippy::needless_pass_by_value)]
  #[must_use]
  pub fn arg(mut self, arg: impl ToString) -> Self {
    self.args.push(arg.to_string());
    self
  }

  /// Append a service command line argument.
  #[allow(clippy::needless_pass_by_value)]
  pub fn arg_ref(&mut self, arg: impl ToString) -> &mut Self {
    self.args.push(arg.to_string());
    self
  }

  /// Append service command line arguments.
  #[must_use]
  pub fn args<I, S>(mut self, args: I) -> Self
  where
    I: IntoIterator<Item = S>,
    S: ToString
  {
    for arg in args {
      self.args.push(arg.to_string());
    }
    self
  }

  /// Append service command line arguments.
  pub fn args_ref<I, S>(&mut self, args: I) -> &mut Self
  where
    I: IntoIterator<Item = S>,
    S: ToString
  {
    for arg in args {
      self.arg_ref(arg.to_string());
    }
    self
  }

  #[must_use]
  pub fn have_args(&self) -> bool {
    !self.args.is_empty()
  }

  /// Add a service environment variable.
  #[allow(clippy::needless_pass_by_value)]
  #[must_use]
  pub fn env<K, V>(mut self, key: K, val: V) -> Self
  where
    K: ToString,
    V: ToString
  {
    self.envs.push((key.to_string(), val.to_string()));
    self
  }

  /// Add a service environment variable.
  #[allow(clippy::needless_pass_by_value)]
  pub fn env_ref<K, V>(&mut self, key: K, val: V) -> &mut Self
  where
    K: ToString,
    V: ToString
  {
    self.envs.push((key.to_string(), val.to_string()));
    self
  }

  /// Add service environment variables.
  #[must_use]
  pub fn envs<I, K, V>(mut self, envs: I) -> Self
  where
    I: IntoIterator<Item = (K, V)>,
    K: ToString,
    V: ToString
  {
    for (key, val) in envs {
      self.envs.push((key.to_string(), val.to_string()));
    }
    self
  }

  /// Add service environment variables.
  pub fn envs_ref<I, K, V>(&mut self, args: I) -> &mut Self
  where
    I: IntoIterator<Item = (K, V)>,
    K: ToString,
    V: ToString
  {
    for (key, val) in args {
      self.env_ref(key.to_string(), val.to_string());
    }
    self
  }

  #[must_use]
  pub fn have_envs(&self) -> bool {
    !self.envs.is_empty()
  }

  /// Mark service to auto-start on boot.
  #[must_use]
  pub const fn autostart(mut self) -> Self {
    self.autostart = true;
    self
  }

  /// Mark service to auto-start on boot.
  pub fn autostart_ref(&mut self) -> &mut Self {
    self.autostart = true;
    self
  }

  /// Sets the work directory that the service should start in.
  ///
  /// This is a utf-8 string rather than a `Path` or `PathBuf` because the
  /// directory tends to end up in places that have an utf-8 constraint.
  #[allow(clippy::needless_pass_by_value)]
  #[must_use]
  pub fn workdir(mut self, workdir: impl ToString) -> Self {
    self.workdir = Some(workdir.to_string());
    self
  }

  /// In-place version of [`Self::workdir()`].
  #[allow(clippy::needless_pass_by_value)]
  pub fn workdir_ref(&mut self, workdir: impl ToString) -> &mut Self {
    self.workdir = Some(workdir.to_string());
    self
  }

  /// Add a service dependency.
  ///
  /// Has no effect on macos.
  #[must_use]
  pub fn depend(mut self, dep: Depend) -> Self {
    self.deps.push(dep);
    self
  }

  /// Add a service dependency.
  ///
  /// Has no effect on macos.
  pub fn depend_ref(&mut self, dep: Depend) -> &mut Self {
    self.deps.push(dep);
    self
  }

  /// Perform the service registration.
  ///
  /// # Errors
  /// The error may be system/service subsystem specific.
  pub fn register(self) -> Result<(), Error> {
    #[cfg(windows)]
    winsvc::install(self)?;

    #[cfg(target_os = "macos")]
    launchd::install(self)?;

    #[cfg(all(target_os = "linux", feature = "systemd"))]
    systemd::install(self)?;

    Ok(())
  }
}

/// Deregister a service from a service subsystem.
///
/// # Errors
/// The error may be system/service subsystem specific.
#[allow(unreachable_code)]
pub fn uninstall(svcname: &str) -> Result<(), Error> {
  #[cfg(windows)]
  {
    winsvc::uninstall(svcname)?;
    return Ok(());
  }
︙
Changes to src/installer/launchd.rs.
|
use std::{fs, io::ErrorKind, path::Path, sync::Arc};

use sidoc::{Builder, RenderContext};

use super::Error;

/// Generate a plist file for running service under launchd.
///
/// # Errors
/// [`Error::IO`] will be returned if file I/O failed when trying to generate
/// plist file.
pub fn install(ctx: super::RegSvc) -> Result<(), Error> {
  let mut bldr = Builder::new();
  bldr.line(r#"<?xml version="1.0" encoding="UTF-8"?>"#);
  bldr.line(r#"<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">"#);
  bldr.scope(r#"<plist version="1.0">"#, Some("</plist>"));
  bldr.scope("<dict>", Some("</dict"));

  // Use name "local.<svcname>" for now
  bldr.line("<key>Label</key>");
  bldr.line(format!("<string>{}</string>", ctx.svcname()));

  let service_binary_path = ::std::env::current_exe()?
    .to_str()
    .ok_or_else(|| Error::bad_format("Executable pathname is not utf-8"))?
    .to_string();

  if let Some(ref username) = ctx.runas.user {
    bldr.line("<key>UserName</key>");
    bldr.line(format!("<string>{username}</string>"));
  }
  if let Some(ref groupname) = ctx.runas.group {
    bldr.line("<key>GroupName</key>");
    bldr.line(format!("<string>{groupname}</string>"));
  }
  if ctx.runas.initgroups {
    bldr.line("<key>InitGroups</key>");
    bldr.line("<true/>");
  }
  if let Some(ref umask) = ctx.runas.umask {
    bldr.line("<key>Umask</key>");
    bldr.line(format!("<integer>{umask}</integer>"));
  }

  if ctx.have_args() {
    bldr.line("<key>ProgramArguments</key>");
    bldr.scope("<array>", Some("</array"));
    bldr.line(format!("<string>{service_binary_path}</string>"));
    for arg in &ctx.args {
      bldr.line(format!("<string>{arg}</string>"));
    }
    bldr.exit(); // <array>
  } else {
    bldr.line("<key>Program</key>");
    bldr.line(format!("<string>{service_binary_path}</string>"));
  }

  let mut envs = Vec::new();
  if let Some(ll) = ctx.log_level {
    envs.push((String::from("LOG_LEVEL"), ll.to_string()));
  }
  if let Some(ref tf) = ctx.trace_filter {
    envs.push((String::from("TRACE_FILTER"), tf.to_string()));
  }
  if let Some(ref fname) = ctx.trace_file {
    envs.push((String::from("TRACE_FILE"), fname.to_string()));
  }
  if ctx.have_envs() {
    for (key, value) in &ctx.envs {
      envs.push((key.to_string(), value.to_string()));
    }
  }
  if !envs.is_empty() {
    bldr.line("<key>EnvironmentVariables</key>");
    bldr.scope("<dict>", Some("</dict"));
    for (key, value) in envs {
      bldr.line(format!("<key>{key}</key>"));
      bldr.line(format!("<string>{value}</string>"));
    }
    bldr.exit(); // <dict>
  }

  if let Some(wd) = ctx.workdir {
    bldr.line("<key>WorkingDirectory</key>");
    bldr.line(format!("<string>{wd}</string>"));
  }

  if ctx.autostart {
    bldr.line("<key>RunAtLoad</key>");
    bldr.line("<true/>");
  }
︙
  // Render the output
  let buf = r.render("root")?;

  // ToDo: Set proper path
  let fname = format!("{}.plist", ctx.svcname);
  let fname = Path::new(&fname);

  if fname.exists() && !ctx.force {
    Err(Error::IO(
      std::io::Error::new(ErrorKind::AlreadyExists, "File already exists")
        .to_string()
    ))?;
  }

  fs::write(fname, buf)?;

  Ok(())
}

#[expect(clippy::missing_errors_doc)]
#[expect(clippy::missing_const_for_fn)]
pub fn uninstall(_svcname: &str) -> Result<(), Error> {
  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
Changes to src/installer/systemd.rs.
use std::{fs, path::Path};

use crate::err::Error;

/// Generate a systemd service unit file.
///
/// # Errors
/// [`Error::IO`] will be returned if file I/O failed when trying to generate
/// unit file.
pub fn install(ctx: super::RegSvc) -> Result<(), Error> {
  let service_binary_path = ::std::env::current_exe()?
    .to_str()
    .ok_or_else(|| Error::bad_format("Executable pathname is not utf-8"))?
    .to_string();

  //
  // [Unit]
  //
  let mut unit_lines: Vec<String> = vec![];
  unit_lines.push("[Unit]".into());
  if let Some(ref desc) = ctx.description {
    unit_lines.push(format!("Description={desc}"));
  }
  if ctx.netservice {
    // Note: The After=network.target does _not_ ensure that the network is up
    //       and running before starting the service.  It's more suited to
    //       ensuring that the service gets _shut down_ before the network is.
    //
    //       Trying to use the service subsystem to wait for the network is a
    //       bad idea -- the service application should do it instead.
    //
    //       See: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
    unit_lines.push("After=network.target".into());
  }

  //
  // [Service]
  //
  let mut svc_lines: Vec<String> = vec![];
  svc_lines.push("[Service]".into());
  if ctx.conf_reload {
    svc_lines.push("Type=notify-reload".into());
  } else {
    svc_lines.push("Type=notify".into());
  }
  if let Some(ref username) = ctx.runas.user {
    svc_lines.push(format!(r#"User="{username}""#));
  }
  if let Some(ref groupname) = ctx.runas.group {
    svc_lines.push(format!(r#"Group="{groupname}""#));
  }
  if let Some(ref umask) = ctx.runas.umask {
    svc_lines.push(format!(r#"UMask="{umask}""#));
  }
  /*
  if ctx.restart_on_failure {
    svc_lines.push(format!("Restart=on-failure"));
  }
  */
  if let Some(ll) = ctx.log_level {
    svc_lines.push(format!(r#"Environment="LOG_LEVEL={ll}""#));
  }
  if let Some(tf) = ctx.trace_filter {
    svc_lines.push(format!(r#"Environment="TRACE_FILTER={tf}""#));
  }
  if let Some(fname) = ctx.trace_file {
    svc_lines.push(format!(r#"Environment="TRACE_FILE={fname}""#));
  }
  for (key, value) in &ctx.envs {
    svc_lines.push(format!(r#"Environment="{key}={value}""#));
  }
  if let Some(wd) = ctx.workdir {
    svc_lines.push(format!("WorkingDirectory={wd}"));
  }

  //
  // Set up service executable and command line arguments.
  //
  let mut cmdline = vec![service_binary_path];
  for arg in &ctx.args {
    cmdline.push(arg.clone());
  }
  svc_lines.push(format!("ExecStart={}", cmdline.join(" ")));

  //
  // [Install]
  //
  let inst_lines = [
    String::from("[Install]"),
    String::from("WantedBy=multi-user.target")
  ];

  //
  // Putting it all together
  //
  let blocks = [
    unit_lines.join("\n"),
    svc_lines.join("\n"),
    inst_lines.join("\n")
  ];
  let mut filebuf = blocks.join("\n\n");
  filebuf.push('\n');

  // ToDo: Set proper path
  let fname = format!("{}.service", ctx.svcname);
  let fname = Path::new(&fname);

  // ToDo: If file already exists, should it be assumed that the service needs
  //       to be stopped?

  if fname.exists() && !ctx.force {
    Err(Error::io("File already exists."))?;
  }

  fs::write(fname, filebuf)?;

  /*
  if install_location == System {
    "systemctl daemon-reload"
    if autorun {
      "systemctl enable {}.service"
    }
  }
  */

  Ok(())
}

#[expect(clippy::missing_errors_doc)]
#[expect(clippy::missing_const_for_fn)]
pub fn uninstall(_svcname: &str) -> Result<(), Error> {
  /*
  if install_location == System {
    systemctl stop {}.service
    rm {}.service
    systemctl daemon-reload
  }
  */
  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
Changes to src/installer/winsvc.rs.
use std::{cell::RefCell, ffi::OsString, thread, time::Duration};

use windows_service::{
  service::{
    ServiceAccess, ServiceDependency, ServiceErrorControl, ServiceInfo,
    ServiceStartType, ServiceState, ServiceType
  },
  service_manager::{ServiceManager, ServiceManagerAccess}
};

use crate::rt::winsvc::{
  create_service_params, get_service_params_subkey, write_service_subkey
};

use super::Error;

/// Register a service in the system service's subsystem.
///
/// # Errors
/// Problems encountered in the Windows service subsystem will be reported as
/// [`Error::SubSystem`].
// ToDo: Make notes about Windows-specific semantics:
//       - Uses registry
//       - Installer
//       - Windows Event Log
pub fn install(ctx: super::RegSvc) -> Result<(), Error> {
  let svcname = &ctx.svcname;

  // Create a reference cell used to keep track of whether to keep system
  // modifications (or not) when leaving function.
  let status = RefCell::new(false);

  // Register an event source named by the service name.
  eventlog::register(svcname)?;

  // The event source registration was successful and is a persistent change.
  // If this function returns early due to an error we want to roll back the
  // changes it made up to that point.  This scope guard is used to deregister
  // the event source if the function returns early.
  let _status = scopeguard::guard(&status, |st| {
    if !*st.borrow() && eventlog::deregister(svcname).is_err() {
      eprintln!("!!> Unable to deregister event source");
    }
  });

  let manager_access =
    ServiceManagerAccess::CONNECT | ServiceManagerAccess::CREATE_SERVICE;
  let service_manager =
    ServiceManager::local_computer(None::<&str>, manager_access)?;

  let service_binary_path = ::std::env::current_exe()?;

  // Default display name to service name
  let display_name = ctx
    .display_name
    .as_ref()
    .map_or(svcname, |display_name| display_name);

  //
  // Generate service launch arguments
  //
  let launch_args: Vec<OsString> =
    ctx.args.iter().map(OsString::from).collect();
︙
      super::Depend::Custom(lst) => {
        for d in lst {
          deps.push(OsString::from(d));
        }
      }
    }
  }
  let dependencies: Vec<ServiceDependency> =
    deps.into_iter().map(ServiceDependency::Service).collect();

  // If None, then run as System
  let account_name = ctx.runas.user.clone().map(OsString::from);

  let service_info = ServiceInfo {
    name: OsString::from(svcname),
    display_name: OsString::from(display_name),
    service_type: ServiceType::OWN_PROCESS,
    start_type: autostart,
    error_control: ServiceErrorControl::Normal,
    executable_path: service_binary_path,
    launch_arguments: launch_args,
    dependencies,
    account_name,
    account_password: None
  };

  //println!("==> Registering service '{}' ..", svcname);
  let service = service_manager
    .create_service(&service_info, ServiceAccess::CHANGE_CONFIG)?;
  let _status = scopeguard::guard(&status, |st| {
    if !(*st.borrow()) {
      let service_access = ServiceAccess::DELETE;
      let res = service_manager.open_service(svcname, service_access);
      if let Ok(service) = res {
        if service.delete().is_err() {
          eprintln!("!!> Unable to delete service");
        }
      } else {
        eprintln!("!!> Unable to open registered service");
      }
    }
  });

  if let Some(ref desc) = ctx.description {
    service.set_description(desc)?;
  }

  if ctx.have_envs() {
    let key = write_service_subkey(svcname)?;
    let envs: Vec<String> =
      ctx.envs.iter().map(|(k, v)| format!("{k}={v}")).collect();
    key.set_value("Environment", &envs)?;
  }

  //println!("==> Service installation successful");

  let mut params = create_service_params(svcname)?;

  // Just so the uninstaller will accept this service
  params.set_value("Installer", &"qsu")?;

  if let Some(wd) = ctx.workdir {
    params.set_value("WorkDir", &wd)?;
  }

  if let Some(ll) = ctx.log_level {
    params.set_value("LogLevel", &ll.to_string())?;
  }
  if let Some(ll) = ctx.trace_filter {
    params.set_value("TraceFilter", &ll)?;
  }
  if let Some(fname) = ctx.trace_file {
    params.set_value("TraceFile", &fname)?;
  }

  // Give application the opportunity to create registry keys.
  if let Some(cb) = ctx.regconf {
    cb(svcname, &mut params)?;
  }
︙
/// Uninstalling the wrong service can have spectacularly bad side-effects, so
/// qsu goes to some lengths to ensure that only services it installed itself
/// can be uninstalled.
///
/// Before attempting an actual uninstallation, this function will verify that
/// under the service's `Parameters` subkey there is an `Installer` key with
/// the value `qsu`.
///
/// # Errors
/// Problems encountered in the Windows service subsystem will be reported as
/// [`Error::SubSystem`].
pub fn uninstall(svcname: &str) -> Result<(), Error> {
  // Only allow uninstallation of services that have an Installer=qsu key in
  // its Parameters subkey.
  if let Ok(params) = get_service_params_subkey(svcname) {
    if let Ok(val) = params.get_value::<String, &str>("Installer") {
      if val != "qsu" {
        Err(Error::missing(
︙
  let manager_access = ServiceManagerAccess::CONNECT;
  let service_manager =
    ServiceManager::local_computer(None::<&str>, manager_access)?;

  let service_access =
    ServiceAccess::QUERY_STATUS | ServiceAccess::STOP | ServiceAccess::DELETE;
  let service = service_manager.open_service(svcname, service_access)?;

  // Make sure service is stopped before trying to delete it
  loop {
    let service_status = service.query_status()?;
    if service_status.current_state == ServiceState::Stopped {
      break;
    }
    println!("==> Requesting service '{svcname}' to stop ..");
    service.stop()?;
    thread::sleep(Duration::from_secs(2));
  }

  //println!("==> Removing service '{}' ..", service_name);
  service.delete()?;
︙
Changes to src/lib.rs.
︙
use std::{ffi::OsStr, path::Path};

pub use async_trait::async_trait;

pub use lumberjack::LumberJack;

pub use err::Error;

#[cfg(any(feature = "rt", feature = "installer"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "rt", feature = "installer"))))]
pub use err::CbErr;

#[cfg(feature = "tokio")]
pub use tokio;

#[cfg(feature = "rocket")]
pub use rocket;

pub use log;
pub use tracing;

#[cfg(feature = "clap")]
#[cfg_attr(docsrs, doc(cfg(feature = "clap")))]
pub use clap;

/// Attempt to derive a default service name based on the executable's name.
///
/// The idea is to get the current executable's file name and strip its
/// extension (if there is one).  The file stem name is the default service
/// name.  On macos the name will be prefixed by `local.`.
#[must_use]
pub fn default_service_name() -> Option<String> {
  let binary_path = ::std::env::current_exe().ok()?;
  let name = binary_path.file_name()?;
  let name = Path::new(name);
  let name = name.file_stem()?;
  mkname(name)
}

#[cfg(not(target_os = "macos"))]
fn mkname(nm: &OsStr) -> Option<String> {
  nm.to_str().map(String::from)
}

#[cfg(target_os = "macos")]
fn mkname(nm: &OsStr) -> Option<String> {
  nm.to_str().map(|x| format!("local.{x}"))
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
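The core of `default_service_name` is the file-name-minus-extension step via `Path::file_stem`. A self-contained sketch of that logic applied to a path string (the `name_from_exe` helper is illustrative, not qsu API):

```rust
use std::path::Path;

// Derive a service name from an executable path: file name minus extension,
// mirroring the `file_stem` logic in `default_service_name`.
fn name_from_exe(path: &str) -> Option<String> {
  Path::new(path).file_stem()?.to_str().map(String::from)
}

fn main() {
  // Extensionless unix-style executable.
  assert_eq!(name_from_exe("/opt/bin/hellosvc"), Some("hellosvc".to_string()));
  // Windows-style executable name: the `.exe` extension is stripped.
  assert_eq!(name_from_exe("hellosvc.exe"), Some("hellosvc".to_string()));
  println!("ok");
}
```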
Changes to src/lumberjack.rs.
︙
  ///
  /// The `tracing` trace level will use the environment variable
  /// `TRACE_FILTER` in a similar manner, but defaults to `none`.
  ///
  /// If the environment variable `TRACE_FILE` is set the value will be
  /// used as the file name to write the trace logs to.
  fn default() -> Self {
    let log_level =
      std::env::var("LOG_LEVEL").map_or(LogLevel::Warn, |level| {
        level
          .parse::<LogLevel>()
          .map_or(LogLevel::Warn, |level| level)
      });

    let trace_file =
      std::env::var("TRACE_FILE").map_or(None, |v| Some(PathBuf::from(v)));
    let trace_filter = env::var("TRACE_FILTER").ok();

    Self {
      init: true,
      log_out: LogOut::default(),
      log_level,
      trace_filter,
      //log_file: None,
      trace_file
    }
  }
}

impl LumberJack {
  /// Create a [`LumberJack::default()`] object.
  #[must_use]
  pub fn new() -> Self {
    Self::default()
  }

  /// Do not initialize logging/tracing.
  ///
  /// This is useful when running tests.
  #[must_use]
  pub fn noinit() -> Self {
    Self {
      init: false,
      ..Default::default()
    }
  }

  #[must_use]
  pub const fn set_init(mut self, flag: bool) -> Self {
    self.init = flag;
    self
  }

  /// Load logging/tracing information from a service Parameters subkey.
  ///
  /// # Errors
  /// `Error::SubSystem`
  #[cfg(windows)]
  pub fn from_winsvc(svcname: &str) -> Result<Self, Error> {
    let params = crate::rt::winsvc::get_service_param(svcname)?;
    let loglevel = params
      .get_value::<String, &str>("LogLevel")
      .unwrap_or_else(|_| String::from("warn"))
      .parse::<LogLevel>()
      .unwrap_or(LogLevel::Warn);
    let tracefilter = params.get_value::<String, &str>("TraceFilter");
    let tracefile = params.get_value::<String, &str>("TraceFile");

    let mut this = Self::new().log_level(loglevel);
    this.log_out = LogOut::WinEvtLog {
      svcname: svcname.to_string()
    };
    let this =
      if let (Ok(tracefilter), Ok(tracefile)) = (tracefilter, tracefile) {
        this.trace_filter(tracefilter).trace_file(tracefile)
      } else {
        this
      };

    Ok(this)
  }

  /// Set the `log` logging level.
  #[must_use]
  pub const fn log_level(mut self, level: LogLevel) -> Self {
    self.log_level = level;
    self
  }

  /// Set the `tracing` log level.
  #[must_use]
  #[allow(clippy::needless_pass_by_value)]
  pub fn trace_filter(mut self, filter: impl ToString) -> Self {
    self.trace_filter = Some(filter.to_string());
    self
  }

  /// Set a file to which `tracing` log entries are written (rather than to
  /// write to console).
  #[must_use]
  pub fn trace_file<P>(mut self, fname: P) -> Self
  where
    P: AsRef<Path>
  {
    self.trace_file = Some(fname.as_ref().to_path_buf());
    self
  }

  /// Commit requested settings to `log` and `tracing`.
  ///
  /// # Errors
  /// Any initialization error is translated into [`Error::LumberJack`].
  pub fn init(self) -> Result<(), Error> {
    if self.init {
      match self.log_out {
        LogOut::Console => {
          init_console_logging()?;
        }
        #[cfg(windows)]
        LogOut::WinEvtLog { svcname } => {
          eventlog::init(&svcname, log::Level::Trace)?;
          log::set_max_level(self.log_level.into());
        }
      }
      if let Some(fname) = self.trace_file {
        init_file_tracing(fname, self.trace_filter.as_deref());
      } else {
        init_console_tracing(self.trace_filter.as_deref());
      }
    }

    Ok(())
  }
}

#[derive(Default, Debug, PartialEq, Eq, Clone, Copy)]
#[cfg_attr(feature = "clap", derive(ValueEnum))]
pub enum LogLevel {
  /// No logging.
  #[cfg_attr(feature = "clap", clap(name = "off"))]
  Off,

  /// Log errors.
  #[cfg_attr(feature = "clap", clap(name = "error"))]
  Error,

  /// Log warnings and errors.
  #[cfg_attr(feature = "clap", clap(name = "warn"))]
  #[default]
  Warn,

  /// Log info, warnings and errors.
  #[cfg_attr(feature = "clap", clap(name = "info"))]
  Info,

  /// Log debug, info, warnings and errors.
  #[cfg_attr(feature = "clap", clap(name = "debug"))]
  Debug,

  /// Log trace, debug, info, warnings and errors.
  #[cfg_attr(feature = "clap", clap(name = "trace"))]
  Trace
}

impl FromStr for LogLevel {
  type Err = String;

  fn from_str(s: &str) -> Result<Self, Self::Err> {
    match s {
      "off" => Ok(Self::Off),
      "error" => Ok(Self::Error),
      "warn" => Ok(Self::Warn),
      "info" => Ok(Self::Info),
      "debug" => Ok(Self::Debug),
      "trace" => Ok(Self::Trace),
      _ => Err(format!("Unknown log level '{s}'"))
    }
  }
}

impl fmt::Display for LogLevel {
  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
    let s = match self {
      Self::Off => "off",
      Self::Error => "error",
      Self::Warn => "warn",
      Self::Info => "info",
      Self::Debug => "debug",
      Self::Trace => "trace"
    };
    write!(f, "{s}")
  }
}

impl From<LogLevel> for log::LevelFilter {
  fn from(ll: LogLevel) -> Self {
    match ll {
      LogLevel::Off => Self::Off,
      LogLevel::Error => Self::Error,
      LogLevel::Warn => Self::Warn,
      LogLevel::Info => Self::Info,
      LogLevel::Debug => Self::Debug,
      LogLevel::Trace => Self::Trace
    }
  }
}

impl From<LogLevel> for Option<tracing::Level> {
  fn from(ll: LogLevel) -> Self {
    match ll {
      LogLevel::Off => None,
      LogLevel::Error => Some(tracing::Level::ERROR),
      LogLevel::Warn => Some(tracing::Level::WARN),
      LogLevel::Info => Some(tracing::Level::INFO),
      LogLevel::Debug => Some(tracing::Level::DEBUG),
      LogLevel::Trace => Some(tracing::Level::TRACE)
︙ | ︙ | |||
299 300 301 302 303 304 305 | .init(); Ok(()) } pub fn init_console_tracing(filter: Option<&str>) { | < < < < < < < < < < | < | < | < < | < < < | < < < < < < < | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 | .init(); Ok(()) } pub fn init_console_tracing(filter: Option<&str>) { // When running on console, then disable tracing by default let filter = filter.map_or_else(|| EnvFilter::new("none"), EnvFilter::new); tracing_subscriber::fmt() .with_env_filter(filter) //.with_timer(timer) .init(); } /// Optionally set up tracing to a file. /// /// This function will attempt to set up tracing to a file. There are three /// conditions that must be true in order for tracing to a file to be enabled: /// - A filename must be specified (either via the `fname` argument or the /// `TRACE_FILE` environment variable). /// - A trace filter must be specified (either via the `filter` argument or the /// `TRACE_FILTER` environment variable). /// - The file name must be openable for writing. /// /// Because tracing is an optional feature and intended for development only, /// this function will enable tracing if possible, and silently ignore and /// errors. pub fn init_file_tracing<P>(fname: P, filter: Option<&str>) where P: AsRef<Path> { // // If both a trace file name and a trace level // let timer = UtcTime::new(format_description!( "[year]-[month]-[day] [hour]:[minute]:[second]" )); let Ok(f) = fs::OpenOptions::new().create(true).append(true).open(fname) else { return; }; // // If TRACING_FILE is set, then default to filter out at warn level. 
// Disable all tracing if TRACE_FILTER is not set // let filter = filter.map_or_else(|| EnvFilter::new("warn"), EnvFilter::new); tracing_subscriber::fmt() .with_env_filter(filter) .with_writer(f) .with_ansi(false) .with_timer(timer) .init(); //tracing_subscriber::registry().with(filter).init(); } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to src/rt.rs.
︙ | ︙ | |||
14 15 16 17 18 19 20 | //! | Platform service | (Windows Service subsystem, systemd, etc) //! | subsystem | //! +---------------------+ //! </pre> //! //! The primary goal of the _qsu_ runtime is to provide a consistent interface //! that will be mapped to whatever service subsystem it is actually running | | | | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | //! | Platform service | (Windows Service subsystem, systemd, etc) //! | subsystem | //! +---------------------+ //! </pre> //! //! The primary goal of the _qsu_ runtime is to provide a consistent interface //! that will be mapped to whatever service subsystem it is actually running //! on. This including no special subsystem at all; such as when running the //! server application as a regular foreground process (which is common during //! development). //! //! # How server applications are implemented and used //! _qsu_ needs to know what kind of runtime the server application expects. //! Server applications pick the runtime type by implementing a trait, of which //! there are currently three recognized types: //! - [`ServiceHandler`] is used for "non-async" server applications. //! - [`TokioServiceHandler`] is used for server applications that run under //! the tokio executor. //! - [`RocketServiceHandler`] is for server applications that are built on top //! of the Rocket HTTP framework. //! //! Each of these implement three methods: //! - `init()` is for initializing the service. //! - `run()` is for running the actual server application. //! - `shutdown()` is for shutting down the server application. //! //! The specifics of these trait methods may look quite different. //! //! Once a service trait has been implemented, the application creates a //! 
//! [`RunCtx`] object and calls its [`run()`](RunCtx::run()) method, passing
//! in a service implementation object.
//!
//! Note: Only one service wrapper must be initialized per process.
//!
//! ## Service Handler semantics
//! When a handler is run through the [`RunCtx`] it will first call the
//! handler's `init()` method.  If it returns `Ok()`, its `run()` method will
//! be run.
//!
//! The handler's `shutdown()` will be called regardless of whether `init()`
//! or `run()` was successful (the only precondition for `shutdown()` to be
//! called is that `init()` was called).
//!
//! # Argument parser
//! _qsu_ offers an [argument parser](crate::argp::ArgParser), which can
//! abstract away much of the runtime management and service registration.

mod nosvc;
mod rttype;
mod signals;

#[cfg(all(target_os = "linux", feature = "systemd"))]
#[cfg_attr(docsrs, doc(cfg(feature = "systemd")))]
︙ | ︙ | |||
122 123 124 125 126 127 128 129 | use async_trait::async_trait; #[cfg(feature = "tokio")] use tokio::runtime; use tokio::sync::broadcast; | > > > | | 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 | use async_trait::async_trait; #[cfg(feature = "tokio")] use tokio::runtime; use tokio::sync::broadcast; #[cfg(all(target_os = "linux", feature = "systemd"))] use sd_notify::NotifyState; use crate::{err::CbErr, lumberjack::LumberJack}; /// The run time environment. /// /// Can be used by the application callbacks to determine whether it is running /// as a service. #[derive(Debug, Clone)] pub enum RunEnv { |
︙ | ︙ | |||
163 164 165 166 167 168 169 | Self::Owned(msg) } } impl AsRef<str> for StateMsg { fn as_ref(&self) -> &str { match self { | | | | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 | Self::Owned(msg) } } impl AsRef<str> for StateMsg { fn as_ref(&self) -> &str { match self { Self::Ref(s) => s, Self::Owned(s) => s } } } /// Report the current startup/shutdown state to the platform service /// subsystem. |
︙ | ︙ | |||
193 194 195 196 197 198 199 200 201 202 203 204 205 206 | sr: Arc<dyn StateReporter + Send + Sync>, cnt: Arc<AtomicU32> } impl InitCtx { /// Return context used to identify whether service application is running as /// a foreground process or a system service. pub fn runenv(&self) -> RunEnv { self.re.clone() } /// Report startup state to the system service manager. /// /// For foreground processes and services that do not support startup state | > | 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | sr: Arc<dyn StateReporter + Send + Sync>, cnt: Arc<AtomicU32> } impl InitCtx { /// Return context used to identify whether service application is running as /// a foreground process or a system service. #[must_use] pub fn runenv(&self) -> RunEnv { self.re.clone() } /// Report startup state to the system service manager. /// /// For foreground processes and services that do not support startup state |
︙ | ︙ | |||
218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 | sr: Arc<dyn StateReporter + Send + Sync>, cnt: Arc<AtomicU32> } impl TermCtx { /// Return context used to identify whether service application is running as /// a foreground process or a system service. pub fn runenv(&self) -> RunEnv { self.re.clone() } /// Report shutdown state to the system service manager. /// /// For foreground processes and services that do not support shutdown state /// notifications this method has no effect. pub fn report(&self, status: Option<StateMsg>) { let checkpoint = self.cnt.fetch_add(1, Ordering::SeqCst); self.sr.stopping(checkpoint, status); } } /// "Synchronous" (non-`async`) server application. /// /// Implement this for an object that wraps a server application that does not /// use an async runtime. pub trait ServiceHandler { | > > > > > > > > | > > > > > | > > > > > | > > | | < < < < | | 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 | sr: Arc<dyn StateReporter + Send + Sync>, cnt: Arc<AtomicU32> } impl TermCtx { /// Return context used to identify whether service application is running as /// a foreground process or a system service. #[must_use] pub fn runenv(&self) -> RunEnv { self.re.clone() } /// Report shutdown state to the system service manager. /// /// For foreground processes and services that do not support shutdown state /// notifications this method has no effect. pub fn report(&self, status: Option<StateMsg>) { let checkpoint = self.cnt.fetch_add(1, Ordering::SeqCst); self.sr.stopping(checkpoint, status); } } /// "Synchronous" (non-`async`) server application. 
/// /// Implement this for an object that wraps a server application that does not /// use an async runtime. pub trait ServiceHandler { type AppErr; /// Implement to handle service application initialization. /// /// # Errors /// Application-defined error returned from callback will be wrapped in /// [`CbErr::App`] and returned to application. fn init(&mut self, ictx: InitCtx) -> Result<(), Self::AppErr>; /// Implement to run service application. /// /// # Errors /// Application-defined error returned from callback will be wrapped in /// [`CbErr::App`] and returned to application. fn run(&mut self, re: &RunEnv) -> Result<(), Self::AppErr>; /// Implement to handle service application termination. /// /// # Errors /// Application-defined error returned from callback will be wrapped in /// [`CbErr::App`] and returned to application. fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr>; } /// `async` server application built on the tokio runtime. /// /// Implement this for an object that wraps a server application that uses /// tokio as an async runtime. #[cfg(feature = "tokio")] #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))] #[async_trait] pub trait TokioServiceHandler { type AppErr; async fn init(&mut self, ictx: InitCtx) -> Result<(), Self::AppErr>; async fn run(&mut self, re: &RunEnv) -> Result<(), Self::AppErr>; async fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr>; } /// Rocket server application handler. /// /// While Rocket is built on top of tokio, it \[Rocket\] wants to initialize /// tokio itself. |
︙ | ︙ | |||
292 293 294 295 296 297 298 299 300 301 302 303 304 305 | /// It is recommended that `ctrlc` shutdown and termination signals are /// disabled in each `Rocket` instance's configuration, and allow the _qsu_ /// runtime to be responsible for initiating the `Rocket` shutdown. #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] #[async_trait] pub trait RocketServiceHandler { /// Rocket service initialization. /// /// The returned `Rocket`s will be ignited and their shutdown handlers will /// be triggered on shutdown. async fn init( &mut self, ictx: InitCtx | > > | | < | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > | | | | < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | > > | > > | > | | > | < > > > | > | > > > | | | > > | > > > > > > | | > > > | > > > > > > > | > | > > | > > > > > | < | < | > > > | > > > | < < | > > | > > > > > > | | > > > | > > > > > > > | > | > > | > > > > > | > > | > | > | > | > > > > > | > > > | 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 
514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 | /// It is recommended that `ctrlc` shutdown and termination signals are /// disabled in each `Rocket` instance's configuration, and allow the _qsu_ /// runtime to be responsible for initiating the `Rocket` shutdown. #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] #[async_trait] pub trait RocketServiceHandler { type AppErr; /// Rocket service initialization. /// /// The returned `Rocket`s will be ignited and their shutdown handlers will /// be triggered on shutdown. async fn init( &mut self, ictx: InitCtx ) -> Result<Vec<rocket::Rocket<rocket::Build>>, Self::AppErr>; /// Server application main entry point. /// /// If the `init()` trait method returned `Rocket<Build>` instances to the /// qsu runtime it will ignote these and pass them to `run()`. It is the /// responsibility of this method to launch the rockets and await them. async fn run( &mut self, rockets: Vec<rocket::Rocket<rocket::Ignite>>, re: &RunEnv ) -> Result<(), Self::AppErr>; async fn shutdown(&mut self, tctx: TermCtx) -> Result<(), Self::AppErr>; } /// The means through which the service termination happened. #[derive(Copy, Clone, Debug)] pub enum Demise { /// On unixy platforms, this indicates that the termination was initiated /// via a `SIGINT` signal. When running as a foreground process on Windows /// this indicates that Ctrl+C was issued. Interrupted, /// On unixy platforms, this indicates that the termination was initiated /// via the `SIGTERM` signal. 
  ///
  /// On Windows, running as a service, this indicates that the service
  /// subsystem requested the service to be shut down.  Running as a
  /// foreground process, this means that Ctrl+Break was issued or that the
  /// console window was closed.
  Terminated,

  /// Reached the end of the service application without any external requests
  /// to terminate.
  ReachedEnd
}

#[derive(Copy, Clone, Debug)]
pub enum UserSig {
  /// SIGUSR1
  Sig1,

  /// SIGUSR2
  Sig2
}

/// Event notifications that originate from the service subsystem that is
/// controlling the server application.
#[derive(Copy, Clone, Debug)]
pub enum SvcEvt {
  /// User events.
  ///
  /// These will be generated on unixy platforms if the process receives
  /// SIGUSR1 or SIGUSR2.
  User(UserSig),

  /// Service subsystem has requested that the server application should pause
  /// its operations.
  ///
  /// Only the Windows service subsystem will emit these events.
  Pause,

  /// Service subsystem has requested that the server application should
  /// resume its operations.
  ///
  /// Only the Windows service subsystem will emit these events.
  Resume,

  /// Service subsystem has requested that the service's configuration should
  /// be reread.
  ///
  /// On unixy platforms this is triggered by SIGHUP, and is unsupported on
  /// Windows.
  ReloadConf,

  /// The service application is terminating.  The `Demise` value indicates
  /// the reason for the shutdown.
  Shutdown(Demise)
}

/// The server application runtime type.
// large_enum_variant isn't relevant here because only one instance of this is
// ever created for a process.
#[allow(clippy::large_enum_variant, clippy::module_name_repetitions)]
pub enum SrvAppRt<ApEr> {
  /// A plain non-async (sometimes referred to as "blocking") server
  /// application.
  Sync {
    svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>,
    rt_handler: Box<dyn ServiceHandler<AppErr = ApEr> + Send>
  },

  /// Initialize a tokio runtime, and call each application handler from an
  /// async context.
#[cfg(feature = "tokio")] #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))] Tokio { rtbldr: Option<runtime::Builder>, svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, rt_handler: Box<dyn TokioServiceHandler<AppErr = ApEr> + Send> }, /// Allow Rocket to initialize the tokio runtime, and call each application /// handler from an async context. #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] Rocket { svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, rt_handler: Box<dyn RocketServiceHandler<AppErr = ApEr> + Send> } } /// Service runner context. pub struct RunCtx { service: bool, svcname: String, log_init: bool, test_mode: bool } impl RunCtx { /// Run as a systemd service. #[cfg(all(target_os = "linux", feature = "systemd"))] fn systemd<ApEr>(self, st: SrvAppRt<ApEr>) -> Result<(), CbErr<ApEr>> where ApEr: Send { LumberJack::default().set_init(self.log_init).init()?; tracing::debug!("Running service '{}'", self.svcname); let sr = Arc::new(systemd::ServiceReporter {}); let re = RunEnv::Service(Some(self.svcname.clone())); match st { SrvAppRt::Sync { svcevt_handler, rt_handler } => rttype::sync_main(rttype::SyncMainParams { re, svcevt_handler, rt_handler, sr, svcevt_ch: None, test_mode: self.test_mode }), SrvAppRt::Tokio { rtbldr, svcevt_handler, rt_handler } => rttype::tokio_main( rtbldr, rttype::TokioMainParams { re, svcevt_handler, rt_handler, sr, svcevt_ch: None } ), #[cfg(feature = "rocket")] SrvAppRt::Rocket { svcevt_handler, rt_handler } => rttype::rocket_main(rttype::RocketMainParams { re, svcevt_handler, rt_handler, sr, svcevt_ch: None }) } } /// Run as a Windows service. 
#[cfg(windows)] fn winsvc<ApEr>(self, st: SrvAppRt<ApEr>) -> Result<(), CbErr<ApEr>> where ApEr: Send + 'static { winsvc::run(&self.svcname, st)?; Ok(()) } /// Run as a foreground server fn foreground<ApEr>(self, st: SrvAppRt<ApEr>) -> Result<(), CbErr<ApEr>> where ApEr: Send { LumberJack::default().set_init(self.log_init).init()?; tracing::debug!("Running service '{}'", self.svcname); let sr = Arc::new(nosvc::ServiceReporter {}); match st { SrvAppRt::Sync { svcevt_handler, rt_handler } => rttype::sync_main(rttype::SyncMainParams { re: RunEnv::Foreground, svcevt_handler, rt_handler, sr, svcevt_ch: None, test_mode: self.test_mode }), #[cfg(feature = "tokio")] SrvAppRt::Tokio { rtbldr, svcevt_handler, rt_handler } => rttype::tokio_main( rtbldr, rttype::TokioMainParams { re: RunEnv::Foreground, svcevt_handler, rt_handler, sr, svcevt_ch: None } ), #[cfg(feature = "rocket")] SrvAppRt::Rocket { svcevt_handler, rt_handler } => rttype::rocket_main(rttype::RocketMainParams { re: RunEnv::Foreground, svcevt_handler, rt_handler, sr, svcevt_ch: None }) } } } impl RunCtx { /// Create a new service running context. #[must_use] pub fn new(name: &str) -> Self { Self { service: false, svcname: name.into(), log_init: true, test_mode: false } } /// Enable test mode. /// /// This method is intended for tests only. /// /// qsu performs a few global initialization that will fail if run repeatedly /// within the same process. This causes some problem when running tests, /// because rust may run tests in threads within the same process. #[doc(hidden)] #[must_use] pub const fn test_mode(mut self) -> Self { self.log_init = false; self.test_mode = true; self } /// Disable logging/tracing initialization. /// /// This is useful in tests because tests may run in different threads within /// the same process, causing the log/tracing initialization to panic. 
  #[doc(hidden)]
  #[must_use]
  pub const fn log_init(mut self, flag: bool) -> Self {
    self.log_init = flag;
    self
  }

  /// Reference version of [`RunCtx::log_init()`].
  #[doc(hidden)]
  pub fn log_init_ref(&mut self, flag: bool) -> &mut Self {
    self.log_init = flag;
    self
  }

  /// Mark this run context to run under the operating system's service
  /// subsystem, if one is available on this platform.
  #[must_use]
  pub const fn service(mut self) -> Self {
    self.service = true;
    self
  }

  /// Mark this run context to run under the operating system's service
  /// subsystem, if one is available on this platform.
  pub fn service_ref(&mut self) -> &mut Self {
    self.service = true;
    self
  }

  #[must_use]
  pub const fn is_service(&self) -> bool {
    self.service
  }

  /// Launch the application.
  ///
  /// If this `RunCtx` has been marked as a _service_ then it will perform the
  /// appropriate service subsystem integration before running the actual
  /// server application code.
  ///
  /// This function must only be called from the main thread of the process,
  /// and must be called before any other threads are started.
  ///
  /// # Errors
  /// [`CbErr::App`] is returned, containing the application-specific error, if
  /// an application callback returned an error.  [`CbErr::Lib`] indicates
  /// that an error occurred in the qsu runtime.
  pub fn run<ApEr>(self, st: SrvAppRt<ApEr>) -> Result<(), CbErr<ApEr>>
  where
    ApEr: Send + 'static
  {
    if self.service {
      #[cfg(all(target_os = "linux", feature = "systemd"))]
      self.systemd(st)?;

      #[cfg(windows)]
      self.winsvc(st)?;
︙ | ︙ | |||
568 569 570 571 572 573 574 | self.foreground(st)?; } Ok(()) } /// Convenience method around [`Self::run()`] using [`SrvAppRt::Sync`]. | > | > | | > > > | > > > > | > | | > > > | > > > > > | > | | > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 | self.foreground(st)?; } Ok(()) } /// Convenience method around [`Self::run()`] using [`SrvAppRt::Sync`]. #[allow(clippy::missing_errors_doc)] pub fn run_sync<ApEr>( self, svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, rt_handler: Box<dyn ServiceHandler<AppErr = ApEr> + Send> ) -> Result<(), CbErr<ApEr>> where ApEr: Send + 'static { self.run(SrvAppRt::Sync { svcevt_handler, rt_handler }) } /// Convenience method around [`Self::run()`] using [`SrvAppRt::Tokio`]. #[cfg(feature = "tokio")] #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))] #[allow(clippy::missing_errors_doc)] pub fn run_tokio<ApEr>( self, rtbldr: Option<runtime::Builder>, svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, rt_handler: Box<dyn TokioServiceHandler<AppErr = ApEr> + Send> ) -> Result<(), CbErr<ApEr>> where ApEr: Send + 'static { self.run(SrvAppRt::Tokio { rtbldr, svcevt_handler, rt_handler }) } /// Convenience method around [`Self::run()`] using [`SrvAppRt::Rocket`]. 
#[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] #[allow(clippy::missing_errors_doc)] pub fn run_rocket<ApEr>( self, svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, rt_handler: Box<dyn RocketServiceHandler<AppErr = ApEr> + Send> ) -> Result<(), CbErr<ApEr>> where ApEr: Send + 'static { self.run(SrvAppRt::Rocket { svcevt_handler, rt_handler }) } } /// Internal thread used to run service event handler. #[tracing::instrument(name = "svcevtthread", skip_all)] fn svcevt_thread( mut rx: broadcast::Receiver<SvcEvt>, mut evt_handler: Box<dyn FnMut(SvcEvt) + Send> ) { while let Ok(msg) = rx.blocking_recv() { tracing::debug!("Received {:?}", msg); #[cfg(all(target_os = "linux", feature = "systemd"))] if matches!(msg, SvcEvt::ReloadConf) { // // Reload has been requested -- report RELOADING=1 and MONOTONIC_USEC to // systemd before calling the application callback. // let ts = nix::time::clock_gettime(nix::time::ClockId::CLOCK_MONOTONIC).unwrap(); let s = format!( "RELOADING=1\nMONOTONIC_USEC={}{:06}", ts.tv_sec(), ts.tv_nsec() / 1000 ); tracing::trace!("Sending notification to systemd: {}", s); let custom = NotifyState::Custom(&s); if let Err(e) = sd_notify::notify(false, &[custom]) { log::error!( "Unable to send RELOADING=1 notification to systemd; {}", e ); } } // // Call the application callback // evt_handler(msg); #[cfg(all(target_os = "linux", feature = "systemd"))] if matches!(msg, SvcEvt::ReloadConf) { // // This is a reload; report READY=1 to systemd after the application // callback has been called. 
      //
      tracing::trace!("Sending notification to systemd: READY=1");
      if let Err(e) = sd_notify::notify(false, &[NotifyState::Ready]) {
        log::error!("Unable to send READY=1 notification to systemd; {}", e);
      }
    }

    // If the event message was either shutdown or terminate, break out of the
    // loop so the thread will terminate
    if let SvcEvt::Shutdown(_) = msg {
      tracing::debug!("Terminating thread");
      // break out of loop when the service shutdown has been requested
      break;
    }
  }
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
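The `RELOADING=1`/`MONOTONIC_USEC`/`READY=1` messages sent above follow systemd's notify protocol, which only takes effect when the unit is declared as `Type=notify`. A hypothetical unit file for a qsu-based service might look like this (service name and paths are illustrative, not from the source):

```ini
# Hypothetical unit for a qsu-based service. Type=notify lets systemd
# consume the READY=1 / RELOADING=1 messages sent via sd_notify above.
[Unit]
Description=Example qsu service

[Service]
Type=notify
ExecStart=/usr/local/bin/hellosvc run
# SIGHUP is surfaced to the application as SvcEvt::ReloadConf.
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```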
Changes to src/rt/rttype.rs.
1 2 3 4 5 6 7 8 9 10 11 12 13 | //! Runtime types. //! //! This module is used to collect the various supported "runtime types" or //! "run contexts". mod sync; #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] mod rocket; #[cfg(feature = "tokio")] #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))] | > > > > > > < | < < | < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | //! Runtime types. //! //! This module is used to collect the various supported "runtime types" or //! "run contexts". mod sync; #[cfg(feature = "tokio")] #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))] mod tokio; #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] mod rocket; pub use sync::{main as sync_main, MainParams as SyncMainParams}; #[cfg(feature = "tokio")] #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))] pub use tokio::{main as tokio_main, MainParams as TokioMainParams}; #[cfg(feature = "rocket")] #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))] pub use rocket::{main as rocket_main, MainParams as RocketMainParams}; // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to src/rt/rttype/rocket.rs.
︙ | ︙ | |||
10 11 12 13 14 15 16 | //! `RocketServiceHandler::init()`. Any `Rocket` instance in this vec will be //! ignited before being passed to `RocketServiceHandler::run()`. //! //! Server applications do not need to use this feature and should return an //! empty vector from `init()` in this case. This also requires the //! application code to trigger a shutdown of each instance itself. | > | > > | | | > > | < | > > > | > | | > | > > > > | > > > | | > | | < > > | > | > > > | > > > | | | > | < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | > > > > > > > | | > > | | | | | < | | | > | < < < < < > | | > | < | < | > | > | > > | > > > | > > > > | > > > > > > | > > > > > | | | > > | < > < < < < < < < < < | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 
298 299 300 301 302 303 304 305 306 307 308 309 310 | //! `RocketServiceHandler::init()`. Any `Rocket` instance in this vec will be //! ignited before being passed to `RocketServiceHandler::run()`. //! //! Server applications do not need to use this feature and should return an //! empty vector from `init()` in this case. This also requires the //! application code to trigger a shutdown of each instance itself. use std::{ sync::{atomic::AtomicU32, Arc}, thread }; use tokio::{sync::broadcast, task}; use killswitch::KillSwitch; use crate::{ err::{AppErrors, CbErr}, rt::{ signals, Demise, InitCtx, RocketServiceHandler, RunEnv, StateReporter, SvcEvt, TermCtx } }; #[cfg(unix)] use crate::rt::UserSig; pub struct MainParams<ApEr> where ApEr: Send { pub(crate) re: RunEnv, pub(crate) svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, pub(crate) rt_handler: Box<dyn RocketServiceHandler<AppErr = ApEr> + Send>, pub(crate) sr: Arc<dyn StateReporter + Send + Sync>, pub(crate) svcevt_ch: Option<(broadcast::Sender<SvcEvt>, broadcast::Receiver<SvcEvt>)> } /// Internal `main()`-like routine for server applications that run one or more /// Rockets as their main application. pub fn main<ApEr>(params: MainParams<ApEr>) -> Result<(), CbErr<ApEr>> where ApEr: Send { rocket::execute(rocket_async_main(params))?; Ok(()) } async fn rocket_async_main<ApEr>( MainParams { re, svcevt_handler, mut rt_handler, sr, svcevt_ch }: MainParams<ApEr> ) -> Result<(), CbErr<ApEr>> where ApEr: Send { let ks = KillSwitch::new(); // If a SvcEvt receiver end-point was handed to us, then use it. The // presumption is that there's a service subsystem somewhere that has // created the channel and is holding one of the end-points. // // Otherwise create our own channel and spawn the monitoring tasks that will // generate events for it. let (tx_svcevt, rx_svcevt) = if let Some((tx_svcevt, rx_svcevt)) = svcevt_ch { (tx_svcevt, rx_svcevt) } else { init_svc_channels(&ks) }; // Call application's init() method. 
  let ictx = InitCtx {
    re: re.clone(),
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let (rockets, init_apperr) = match rt_handler.init(ictx).await {
    Ok(rockets) => (rockets, None),
    Err(e) => (Vec::new(), Some(e))
  };

  // Ignite rockets so we can get Shutdown contexts for each of the instances.
  // There are two cases where the rockets vector will be empty:
  // - if init() returned an error.
  // - the application doesn't want to use the built-in rocket shutdown
  //   support.
  let mut ignited = vec![];
  let mut rocket_shutdowns = vec![];
  for rocket in rockets {
    let rocket = rocket.ignite().await?;
    rocket_shutdowns.push(rocket.shutdown());
    ignited.push(rocket);
  }

  // If init() was successful, then do some preparations and then run the
  // application run() callback.
  let run_apperr = if init_apperr.is_none() {
    // Set the service's state to "started"
    sr.started();

    let mut rx_svcevt2 = rx_svcevt.resubscribe();

    // Launch a task that waits for the SvcEvt::Shutdown event.  Once it
    // arrives, tell all rocket instances to shut down gracefully.
    //
    // Note: We don't want to use the killswitch for this because the
    // killswitch isn't triggered until run() has returned.
    let jh_graceful_landing = task::spawn(async move {
      loop {
        let msg = match rx_svcevt2.recv().await {
          Ok(msg) => msg,
          Err(e) => {
            log::error!("Unable to receive broadcast SvcEvt message, {}", e);
            break;
          }
        };
        if let SvcEvt::Shutdown(_) = msg {
          tracing::trace!("Ask rocket instances to shut down gracefully");
          for shutdown in rocket_shutdowns {
            // Tell this rocket instance to shut down gracefully.
            shutdown.notify();
          }
          break;
        }
        tracing::trace!("Ignored message in task waiting for shutdown");
      }
    });

    sr.started();

    // Kick off service event monitoring thread before running main app
    // callback
    let jh = thread::Builder::new()
      .name("svcevt".into())
      .spawn(|| crate::rt::svcevt_thread(rx_svcevt, svcevt_handler))
      .unwrap();

    // Run the main service application callback.
//
    // This is basically the service application's "main()".
    let res = rt_handler.run(ignited, &re).await.err();

    // Shut down svcevent thread
    //
    // Tell it that an (implicit) shutdown event has occurred.
    // Duplicates don't matter, because once the first one is processed the
    // thread will terminate.
    let _ = tx_svcevt.send(SvcEvt::Shutdown(Demise::ReachedEnd));

    // Wait for the task that is waiting for a shutdown event to complete
    if let Err(e) = jh_graceful_landing.await {
      log::warn!(
        "An error was returned from the graceful landing task; {}",
        e
      );
    }

    // Wait for service event monitoring thread to terminate
    let _ = task::spawn_blocking(|| jh.join()).await;

    res
  } else {
    None
  };

  // Always send the first shutdown checkpoint here.  Either init() failed or
  // run() returned.  Either way, we're shutting down.
  sr.stopping(1, None);

  // Now that the main application has terminated, kill off any remaining
  // auxiliary tasks (read: signal waiters)
  ks.trigger();
  if (ks.finalize().await).is_err() {
    log::warn!("Attempted to finalize KillSwitch that wasn't triggered yet");
  }

  // Call the application's shutdown() function.
  let tctx = TermCtx {
    re,
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let term_apperr = rt_handler.shutdown(tctx).await.err();

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  // There can be multiple failures, and we don't want to lose information
  // about what went wrong, so return an error context that can contain all
  // callback errors.
if init_apperr.is_some() || run_apperr.is_some() || term_apperr.is_some() { let apperrs = AppErrors { init: init_apperr, run: run_apperr, shutdown: term_apperr }; Err(CbErr::SrvApp(apperrs))?; } Ok(()) } fn init_svc_channels( ks: &KillSwitch ) -> (broadcast::Sender<SvcEvt>, broadcast::Receiver<SvcEvt>) { // Create channel used to signal events to application let (tx, rx) = broadcast::channel(16); // ToDo: autoclone let ks2 = ks.clone(); // SIGINT (on unix) and Ctrl+C on Windows should trigger a Shutdown event. // ToDo: autoclone let txc = tx.clone(); task::spawn(signals::wait_shutdown( move || { if let Err(e) = txc.send(SvcEvt::Shutdown(Demise::Interrupted)) { log::error!("Unable to send SvcEvt::Shutdown event; {}", e); } }, ks2 )); // SIGTERM (on unix) and Ctrl+Break/Close on Windows should trigger a // Terminate event. // ToDo: autoclone let txc = tx.clone(); // ToDo: autoclone let ks2 = ks.clone(); task::spawn(signals::wait_term( move || { if let Err(e) = txc.send(SvcEvt::Shutdown(Demise::Terminated)) { log::error!("Unable to send SvcEvt::Terminate event; {}", e); } }, ks2 )); // There doesn't seem to be anything equivalent to SIGHUP for Windows // (Services) #[cfg(unix)] { // ToDo: autoclone let ks2 = ks.clone(); // ToDo: autoclone let txc = tx.clone(); task::spawn(signals::wait_reload( move || { if let Err(e) = txc.send(SvcEvt::ReloadConf) { log::error!("Unable to send SvcEvt::ReloadConf event; {}", e); } }, ks2 )); } #[cfg(unix)] { let ks2 = ks.clone(); let txc = tx.clone(); task::spawn(signals::wait_user1( move || { if let Err(e) = txc.send(SvcEvt::User(UserSig::Sig1)) { log::error!("Unable to send SvcEvt::User(Sig1) event; {}", e); } }, ks2 )); } #[cfg(unix)] { let ks2 = ks.clone(); let txc = tx.clone(); task::spawn(signals::wait_user2( move || { if let Err(e) = txc.send(SvcEvt::User(UserSig::Sig2)) { log::error!("Unable to send SvcEvt::User(Sig2) event; {}", e); } }, ks2 )); } (tx, rx) } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
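The graceful-landing task above fans one shutdown event out to every ignited Rocket instance. The fan-out can be sketched without Rocket or tokio at all; `ShutdownHandle` below is a hypothetical stand-in for `rocket::Shutdown` (an atomic flag the owning instance would poll), not the real type:

```rust
use std::sync::{
  atomic::{AtomicBool, Ordering},
  Arc
};

// Hypothetical stand-in for `rocket::Shutdown`: notifying it flips a flag
// that the owning instance would poll.
#[derive(Clone)]
struct ShutdownHandle(Arc<AtomicBool>);

impl ShutdownHandle {
  fn new() -> Self {
    Self(Arc::new(AtomicBool::new(false)))
  }
  fn notify(&self) {
    self.0.store(true, Ordering::SeqCst);
  }
  fn is_notified(&self) -> bool {
    self.0.load(Ordering::SeqCst)
  }
}

// Fan a single shutdown event out to every collected handle, like the
// graceful-landing task does for each `rocket.shutdown()` context.
fn notify_all(handles: &[ShutdownHandle]) {
  for h in handles {
    h.notify();
  }
}

fn main() {
  let handles: Vec<ShutdownHandle> =
    (0..3).map(|_| ShutdownHandle::new()).collect();
  notify_all(&handles);
  assert!(handles.iter().all(ShutdownHandle::is_notified));
  println!("all instances notified");
}
```

Collecting the handles before `run()` starts is what makes the single broadcast event sufficient later.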
Changes to src/rt/rttype/sync.rs.
|
| > | > > | | | | < < | | > | | > | > > > > > > > > > > > > | > | > > > > > > | > | > > > > > > > > > > > > | | | | | < | > > > > > > > > > > > | > > > > > > > > > > > > | > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 | use std::{ sync::{atomic::AtomicU32, Arc}, thread }; use tokio::sync::broadcast; use crate::{ err::{AppErrors, CbErr}, rt::{ signals, Demise, InitCtx, RunEnv, ServiceHandler, StateReporter, SvcEvt, TermCtx } }; #[cfg(unix)] use crate::rt::{signals::SigType, UserSig}; pub struct MainParams<ApEr> { pub(crate) re: RunEnv, pub(crate) svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, pub(crate) rt_handler: Box<dyn ServiceHandler<AppErr = ApEr>>, pub(crate) sr: Arc<dyn StateReporter + Send + Sync>, pub(crate) svcevt_ch: Option<(broadcast::Sender<SvcEvt>, broadcast::Receiver<SvcEvt>)>, pub(crate) test_mode: bool } /// Internal `main()`-like routine for server applications that run plain old /// non-`async` code. pub fn main<ApEr>( MainParams { re, svcevt_handler, mut rt_handler, sr, svcevt_ch, test_mode }: MainParams<ApEr> ) -> Result<(), CbErr<ApEr>> { // Get rid of unused variable warning #[cfg(unix)] let _ = test_mode; // If a channel was passed from caller, then use it. The assumption is // that there's some other runtime that is monitoring for system // (service-related) events that is holding a sending end-point. // // Otherwise create a new unbounded channel and kick off the appropriate // system event monitoring. 
let (tx_svcevt, rx_svcevt) = if let Some((tx_svcevt, rx_svcevt)) = svcevt_ch
  {
    // Use the broadcast receiver supplied by caller (it likely originates from
    // a service runtime integration).
    (tx_svcevt, rx_svcevt)
  } else {
    let (tx, rx) = broadcast::channel(16);
    let tx2 = tx.clone();
    #[cfg(unix)]
    signals::sync_sigmon(move |st| match st {
      SigType::Usr1 => {
        if let Err(e) = tx2.send(SvcEvt::User(UserSig::Sig1)) {
          log::error!("Unable to send SvcEvt::User(Sig1) event; {}", e);
        }
      }
      SigType::Usr2 => {
        if let Err(e) = tx2.send(SvcEvt::User(UserSig::Sig2)) {
          log::error!("Unable to send SvcEvt::User(Sig2) event; {}", e);
        }
      }
      SigType::Int => {
        if let Err(e) = tx2.send(SvcEvt::Shutdown(Demise::Interrupted)) {
          log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
        }
      }
      SigType::Term => {
        if let Err(e) = tx2.send(SvcEvt::Shutdown(Demise::Terminated)) {
          log::error!("Unable to send SvcEvt::Terminate event; {}", e);
        }
      }
      SigType::Hup => {
        if let Err(e) = tx2.send(SvcEvt::ReloadConf) {
          log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
        }
      }
    })?;

    // On Windows, if rx_svcevt is None, it means we're not running under the
    // service subsystem (i.e. we're running as a foreground process), so
    // register a Ctrl+C handler.
    #[cfg(windows)]
    signals::sync_kill_to_event(tx2, test_mode)?;

    (tx, rx)
  };

  // Call server application's init() method, passing along a startup state
  // reporter object.
  let ictx = InitCtx {
    re: re.clone(),
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let init_apperr = rt_handler.init(ictx).err();

  // If init() was successful, set the service's state to "started" and then
  // call the server application's run() method.
let run_apperr = if init_apperr.is_none() {
    sr.started();

    // Kick off service event monitoring thread before running main app
    // callback
    let jh = thread::Builder::new()
      .name("svcevt".into())
      .spawn(|| crate::rt::svcevt_thread(rx_svcevt, svcevt_handler))
      .unwrap();

    // Run the main service application callback.
    //
    // This is basically the service application's "main()".
    let ret = rt_handler.run(&re).err();

    // Shut down svcevent thread
    //
    // Tell it that an (implicit) shutdown event has occurred.
    // Duplicates don't matter, because once the first one is processed the
    // thread will terminate.
    let _ = tx_svcevt.send(SvcEvt::Shutdown(Demise::ReachedEnd));

    // Wait for service event monitoring thread to terminate
    let _ = jh.join();

    ret
  } else {
    None
  };

  // Always send the first shutdown checkpoint here.  Either init() failed or
  // run() returned.  Either way, we're shutting down.
  sr.stopping(1, None);

  // Call the application's shutdown() function, passing along a shutdown state
  // reporter object.
  let tctx = TermCtx {
    re,
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let term_apperr = rt_handler.shutdown(tctx).err();

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  // There can be multiple failures, and we don't want to lose information
  // about what went wrong, so return an error context that can contain all
  // callback errors.
  if init_apperr.is_some() || run_apperr.is_some() || term_apperr.is_some() {
    let apperrs = AppErrors {
      init: init_apperr,
      run: run_apperr,
      shutdown: term_apperr
    };
    Err(CbErr::SrvApp(apperrs))?;
  }

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
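The tail of `main()` above deliberately collects the `init()`, `run()`, and `shutdown()` errors instead of bailing out on the first one: `run()` is skipped when `init()` fails, `shutdown()` always runs, and every failure is preserved. A self-contained sketch of that control flow, where `PhaseErrors` and `drive` are illustrative names rather than qsu API:

```rust
// Minimal stand-in for qsu's `AppErrors`: each lifecycle phase may fail,
// and all failures are kept rather than only the first one.
#[derive(Debug)]
struct PhaseErrors {
  init: Option<String>,
  run: Option<String>,
  shutdown: Option<String>
}

// Mirror the control flow above: run() executes only when init() succeeded,
// shutdown() always executes, and every error is preserved.
fn drive(
  init: impl FnOnce() -> Result<(), String>,
  run: impl FnOnce() -> Result<(), String>,
  shutdown: impl FnOnce() -> Result<(), String>
) -> Result<(), PhaseErrors> {
  let init_err = init().err();
  let run_err = if init_err.is_none() { run().err() } else { None };
  let shutdown_err = shutdown().err();
  if init_err.is_some() || run_err.is_some() || shutdown_err.is_some() {
    Err(PhaseErrors {
      init: init_err,
      run: run_err,
      shutdown: shutdown_err
    })
  } else {
    Ok(())
  }
}

fn main() {
  // run() is skipped because init() failed, but shutdown() still reports.
  let errs = drive(
    || Err("no config".into()),
    || Ok(()),
    || Err("flush failed".into())
  )
  .unwrap_err();
  assert_eq!(errs.init.as_deref(), Some("no config"));
  assert_eq!(errs.run, None);
  assert_eq!(errs.shutdown.as_deref(), Some("flush failed"));
  println!("collected: {errs:?}");
}
```

Keeping all three slots is what lets the caller distinguish "startup failed" from "ran, but teardown failed".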
Changes to src/rt/rttype/tokio.rs.
|
| > | > > | | > > | < | < | > | | > | > > > > > > > > | > > > | | > | | < > > | > | > > > < < < < < | < | < < < < < < < < < < | < < < < < < < < < < < < | < < < < < | < < < < < < < < < < < < < < | > > > > > > > > > > > | > > > > > > > > > > > > < | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 | use std::{ sync::{atomic::AtomicU32, Arc}, thread }; use tokio::{runtime, sync::broadcast, task}; use crate::{ err::{AppErrors, CbErr}, rt::{ signals, Demise, InitCtx, RunEnv, StateReporter, SvcEvt, TermCtx, TokioServiceHandler } }; use killswitch::KillSwitch; #[cfg(unix)] use crate::rt::UserSig; pub struct MainParams<ApEr> { pub(crate) re: RunEnv, pub(crate) svcevt_handler: Box<dyn FnMut(SvcEvt) + Send>, pub(crate) rt_handler: Box<dyn TokioServiceHandler<AppErr = ApEr> + Send>, pub(crate) sr: Arc<dyn StateReporter + Send + Sync>, pub(crate) svcevt_ch: Option<(broadcast::Sender<SvcEvt>, broadcast::Receiver<SvcEvt>)> } /// Internal `main()`-like routine for server applications that run the tokio /// runtime for `async` 
code. pub fn main<ApEr>( rtbldr: Option<runtime::Builder>, params: MainParams<ApEr> ) -> Result<(), CbErr<ApEr>> where ApEr: Send { let rt = if let Some(mut bldr) = rtbldr { bldr.build()? } else { tokio::runtime::Runtime::new()? }; rt.block_on(async_main(params))?; Ok(()) } /// The `async` main function for tokio servers. /// /// If `rx_svcevt` is `Some(_)` it means the channel was created elsewhere /// (implied: The transmitting endpoint lives somewhere else). If it is `None` /// the channel needs to be created. async fn async_main<ApEr>( MainParams { re, svcevt_handler, mut rt_handler, sr, svcevt_ch }: MainParams<ApEr> ) -> Result<(), CbErr<ApEr>> where ApEr: Send { let ks = KillSwitch::new(); // If a SvcEvt receiver end-point was handed to us, then use it. Otherwise // create our own and spawn the monitoring tasks that will generate events // for it. let (tx_svcevt, rx_svcevt) = if let Some((tx_svcevt, rx_svcevt)) = svcevt_ch { (tx_svcevt, rx_svcevt) } else { init_svc_channels(&ks) }; // Call application's init() method. let ictx = InitCtx { re: re.clone(), sr: Arc::clone(&sr), cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2 }; let init_apperr = rt_handler.init(ictx).await.err(); let run_apperr = if init_apperr.is_none() { sr.started(); // Kick off service event monitoring thread before running main app // callback let jh = thread::Builder::new() .name("svcevt".into()) .spawn(|| crate::rt::svcevt_thread(rx_svcevt, svcevt_handler)) .unwrap(); // Run the main service application callback. // // This is basically the service application's "main()". let ret = rt_handler.run(&re).await.err(); // Shut down svcevent thread // // Tell it that an (implicit) shutdown event has occurred. // Duplicates don't matter, because once the first one is processed the // thread will terminate. 
let _ = tx_svcevt.send(SvcEvt::Shutdown(Demise::ReachedEnd));

    // Wait for service event monitoring thread to terminate
    let _ = task::spawn_blocking(|| jh.join()).await;

    ret
  } else {
    None
  };

  // Always send the first shutdown checkpoint here.  Either init() failed or
  // run() returned.  Either way, we're shutting down.
  sr.stopping(1, None);

  // Now that the main application has terminated, kill off any remaining
  // auxiliary tasks (read: signal waiters)
  ks.trigger();
  if (ks.finalize().await).is_err() {
    log::warn!("Attempted to finalize KillSwitch that wasn't triggered yet");
  }

  // Call the application's shutdown() function.
  let tctx = TermCtx {
    re,
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let term_apperr = rt_handler.shutdown(tctx).await.err();

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  // There can be multiple failures, and we don't want to lose information
  // about what went wrong, so return an error context that can contain all
  // callback errors.
  if init_apperr.is_some() || run_apperr.is_some() || term_apperr.is_some() {
    let apperrs = AppErrors {
      init: init_apperr,
      run: run_apperr,
      shutdown: term_apperr
    };
    Err(CbErr::SrvApp(apperrs))?;
  }

  Ok(())
}

fn init_svc_channels(
  ks: &KillSwitch
) -> (broadcast::Sender<SvcEvt>, broadcast::Receiver<SvcEvt>) {
  // Create channel used to signal events to application
  let (tx, rx) = broadcast::channel(16);

  let ks2 = ks.clone();

  // SIGINT (on unix) and Ctrl+C on Windows should trigger a Shutdown event.
  let txc = tx.clone();
  task::spawn(signals::wait_shutdown(
    move || {
      if let Err(e) = txc.send(SvcEvt::Shutdown(Demise::Interrupted)) {
        log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
      }
    },
    ks2
  ));

  // SIGTERM (on unix) and Ctrl+Break/Close on Windows should trigger a
  // Terminate event.
let txc = tx.clone(); let ks2 = ks.clone(); task::spawn(signals::wait_term( move || { let svcevt = SvcEvt::Shutdown(Demise::Terminated); if let Err(e) = txc.send(svcevt) { log::error!("Unable to send SvcEvt::Terminate event; {}", e); } }, ks2 )); // There doesn't seem to be anything equivalent to SIGHUP for Windows // (Services) #[cfg(unix)] { let ks2 = ks.clone(); let txc = tx.clone(); task::spawn(signals::wait_reload( move || { if let Err(e) = txc.send(SvcEvt::ReloadConf) { log::error!("Unable to send SvcEvt::ReloadConf event; {}", e); } }, ks2 )); } #[cfg(unix)] { let ks2 = ks.clone(); let txc = tx.clone(); task::spawn(signals::wait_user1( move || { if let Err(e) = txc.send(SvcEvt::User(UserSig::Sig1)) { log::error!("Unable to send SvcEvt::User(Sig1) event; {}", e); } }, ks2 )); } #[cfg(unix)] { let ks2 = ks.clone(); let txc = tx.clone(); task::spawn(signals::wait_user2( move || { if let Err(e) = txc.send(SvcEvt::User(UserSig::Sig2)) { log::error!("Unable to send SvcEvt::User(Sig2) event; {}", e); } }, ks2 )); } (tx, rx) } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to src/rt/signals.rs.
1 2 3 4 5 6 7 8 9 10 11 12 | //! Signal monitoring. #[cfg(unix)] mod unix; #[cfg(windows)] mod win; #[cfg(unix)] pub use unix::{sync_sigmon, SigType}; #[cfg(all(unix, feature = "tokio"))] | | > > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | //! Signal monitoring. #[cfg(unix)] mod unix; #[cfg(windows)] mod win; #[cfg(unix)] pub use unix::{sync_sigmon, SigType}; #[cfg(all(unix, feature = "tokio"))] pub use unix::{ wait_reload, wait_shutdown, wait_term, wait_user1, wait_user2 }; #[cfg(all(windows, feature = "tokio"))] pub use win::{wait_shutdown, wait_term}; #[cfg(windows)] pub use win::sync_kill_to_event; // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to src/rt/signals/unix.rs.
︙ | ︙ | |||
14 15 16 17 18 19 20 | /// Async task used to wait for SIGINT/SIGTERM. /// /// Whenever a SIGINT or SIGTERM is signalled the closure in `f` is called and /// the task is terminated. #[cfg(feature = "tokio")] pub async fn wait_shutdown<F>(f: F, ks: KillSwitch) where | | < < | < < < | < < < < < < < < | < < < < < | < < | < < < < < < < < < < < < < < < < | > > | > > > > | > > | > > > > > > > > > > > > > > > > > > > > | > | | < > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 | /// Async task used to wait for SIGINT/SIGTERM. /// /// Whenever a SIGINT or SIGTERM is signalled the closure in `f` is called and /// the task is terminated. #[cfg(feature = "tokio")] pub async fn wait_shutdown<F>(f: F, ks: KillSwitch) where F: FnOnce() + Send { wait_oneshot_signal(SignalKind::interrupt(), f, ks).await; } #[cfg(feature = "tokio")] pub async fn wait_term<F>(f: F, ks: KillSwitch) where F: FnOnce() + Send { wait_oneshot_signal(SignalKind::terminate(), f, ks).await; } /// Async task used to wait for SIGHUP /// /// Whenever a SIGHUP is signalled the closure in `f` is called. 
#[cfg(feature = "tokio")]
pub async fn wait_reload<F>(f: F, ks: KillSwitch)
where
  F: Fn() + Send
{
  wait_repeating_signal(SignalKind::hangup(), f, ks).await;
}

#[cfg(feature = "tokio")]
pub async fn wait_user1<F>(f: F, ks: KillSwitch)
where
  F: Fn() + Send
{
  wait_repeating_signal(SignalKind::user_defined1(), f, ks).await;
}

#[cfg(feature = "tokio")]
pub async fn wait_user2<F>(f: F, ks: KillSwitch)
where
  F: Fn() + Send
{
  wait_repeating_signal(SignalKind::user_defined2(), f, ks).await;
}

#[cfg(feature = "tokio")]
pub async fn wait_repeating_signal<F>(
  sigkind: SignalKind,
  f: F,
  ks: KillSwitch
) where
  F: Fn() + Send
{
  tracing::trace!("Repeating {:?} task launched", sigkind);
  let Ok(mut sig) = signal(sigkind) else {
    log::error!("Unable to create {:?} Future", sigkind);
    return;
  };
  loop {
    tokio::select! {
      _ = sig.recv() => {
        tracing::debug!("Received {:?} -- running closure", sigkind);
        f();
      }
      () = ks.wait() => {
        tracing::debug!("killswitch triggered");
        break;
      }
    }
  }
  tracing::trace!("{:?} terminating", sigkind);
}

#[cfg(feature = "tokio")]
pub async fn wait_oneshot_signal<F>(sigkind: SignalKind, f: F, ks: KillSwitch)
where
  F: FnOnce() + Send
{
  tracing::trace!("One-shot {:?} task launched", sigkind);
  let Ok(mut sig) = signal(sigkind) else {
    log::error!("Unable to create {:?} Future", sigkind);
    return;
  };

  // Wait for either the signal or the killswitch to trigger.
  tokio::select! {
    _ = sig.recv() => {
      tracing::debug!("Received {:?} -- running closure", sigkind);
      f();
    }
    () = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }
  tracing::trace!("{:?} terminating", sigkind);
}

pub enum SigType {
  Usr1,
  Usr2,
  Int,
  Term,
  Hup
}

pub fn sync_sigmon<F>(f: F) -> Result<thread::JoinHandle<()>, Error>
where
  F: Fn(SigType) + Send + 'static
{
  //
  // Block signals-of-interest on main thread.
// let mut ss = SigSet::empty(); ss.add(Signal::SIGINT); ss.add(Signal::SIGTERM); ss.add(Signal::SIGHUP); ss.add(Signal::SIGUSR1); ss.add(Signal::SIGUSR2); let mut oldset = SigSet::empty(); nix::sys::signal::pthread_sigmask( SigmaskHow::SIG_SETMASK, Some(&ss), Some(&mut oldset) ) .unwrap(); let jh = thread::Builder::new() .name("sigmon".into()) .spawn(move || { // Note: Don't need to unblock signals in this thread, because sigwait() // does it implicitly. let mask = unsafe { let mut mask: libc::sigset_t = std::mem::zeroed(); libc::sigemptyset(&mut mask); libc::sigaddset(&mut mask, libc::SIGINT); libc::sigaddset(&mut mask, libc::SIGTERM); libc::sigaddset(&mut mask, libc::SIGHUP); libc::sigaddset(&mut mask, libc::SIGUSR1); libc::sigaddset(&mut mask, libc::SIGUSR2); mask }; loop { let mut sig: libc::c_int = 0; let ret = unsafe { libc::sigwait(&mask, &mut sig) }; if ret == 0 { let signal = Signal::try_from(sig).unwrap(); match signal { Signal::SIGUSR1 => { f(SigType::Usr1); } Signal::SIGUSR2 => { f(SigType::Usr2); } Signal::SIGINT => { f(SigType::Int); break; } Signal::SIGTERM => { f(SigType::Term); break; |
︙ | ︙ |
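The `sync_sigmon` design above (block the signals of interest, then dedicate a `sigmon` thread to waiting and dispatching) can be mimicked without real signals by swapping `sigwait()` for a channel. `Ev` and `spawn_monitor` below are illustrative stand-ins, not qsu API; the loop shape is the same: forward every event to the callback and exit on a terminal one.

```rust
use std::{sync::mpsc, thread};

#[derive(Debug, PartialEq)]
enum Ev {
  Usr1,
  Int
}

// Channel-based analogue of `sync_sigmon`: a dedicated thread consumes
// events, forwards each to the callback, and exits on a terminal event,
// just as the real thread exits after SIGINT/SIGTERM.
fn spawn_monitor<F>(rx: mpsc::Receiver<Ev>, f: F) -> thread::JoinHandle<()>
where
  F: Fn(&Ev) + Send + 'static
{
  thread::Builder::new()
    .name("sigmon".into())
    .spawn(move || {
      for ev in rx {
        let terminal = ev == Ev::Int;
        f(&ev);
        if terminal {
          break;
        }
      }
    })
    .expect("spawn sigmon")
}

fn main() {
  let (tx, rx) = mpsc::channel();
  let (out_tx, out_rx) = mpsc::channel();
  let jh = spawn_monitor(rx, move |ev| {
    out_tx.send(format!("{ev:?}")).unwrap();
  });
  tx.send(Ev::Usr1).unwrap(); // repeating event: thread keeps running
  tx.send(Ev::Int).unwrap(); // terminal event: thread exits
  jh.join().unwrap();
  let seen: Vec<String> = out_rx.try_iter().collect();
  assert_eq!(seen, ["Usr1", "Int"]);
  println!("{seen:?}");
}
```

Dedicating one thread to the blocking wait keeps signal handling off the main thread, which is the point of the real `sigwait()`-based loop.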
Changes to src/rt/signals/win.rs.
1 2 | use std::sync::OnceLock; | | > > > < | | > > > | | > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | use std::sync::OnceLock; use tokio::sync::broadcast; #[cfg(feature = "tokio")] use {killswitch::KillSwitch, tokio::signal}; use windows_sys::Win32::{ Foundation::{BOOL, FALSE, TRUE}, System::Console::{ SetConsoleCtrlHandler, CTRL_BREAK_EVENT, CTRL_CLOSE_EVENT, CTRL_C_EVENT } }; use crate::{ err::Error, rt::{Demise, SvcEvt} }; static CELL: OnceLock<Box<dyn Fn(u32) -> BOOL + Send + Sync>> = OnceLock::new(); /// Async task used to wait for Ctrl+C to be signalled. /// /// Whenever a Ctrl+C is signalled the closure in `f` is called and /// the task is terminated. #[cfg(feature = "tokio")] pub async fn wait_shutdown<F>(f: F, ks: KillSwitch) where F: FnOnce() + Send { tracing::trace!("CTRL+C task launched"); tokio::select! { _ = signal::ctrl_c() => { tracing::debug!("Received Ctrl+C"); // Once any process termination signal has been received post call the // callback. f(); }, () = ks.wait() => { tracing::debug!("killswitch triggered"); } } tracing::trace!("wait_shutdown() terminating"); } #[cfg(feature = "tokio")] pub async fn wait_term<F>(f: F, ks: KillSwitch) where F: FnOnce() + Send { tracing::trace!("CTRL+Break/Close task launched"); let Ok(mut cbreak) = signal::windows::ctrl_break() else { log::error!("Unable to create Ctrl+Break monitor"); return; }; |
︙ | ︙ | |||
67 68 69 70 71 72 73 | }, _ = cclose.recv() => { tracing::debug!("Received Close"); // Once any process termination signal has been received post call the // callback. f(); }, | | | | | | | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 | }, _ = cclose.recv() => { tracing::debug!("Received Close"); // Once any process termination signal has been received post call the // callback. f(); }, () = ks.wait() => { tracing::debug!("killswitch triggered"); } } tracing::trace!("wait_term() terminating"); } pub fn sync_kill_to_event( tx: broadcast::Sender<SvcEvt>, test_mode: bool ) -> Result<(), Error> { setup_sync_fg_kill_handler( move |ty| { match ty { CTRL_C_EVENT => { tracing::trace!( "Received some kind of event that should trigger a shutdown." ); if tx.send(SvcEvt::Shutdown(Demise::Interrupted)).is_ok() { // We handled this event TRUE } else { FALSE } } CTRL_BREAK_EVENT | CTRL_CLOSE_EVENT => { tracing::trace!( "Received some kind of event that should trigger a termination." ); if tx.send(SvcEvt::Shutdown(Demise::Terminated)).is_ok() { // We handled this event TRUE } else { FALSE } } _ => FALSE } }, test_mode )?; Ok(()) } pub fn setup_sync_fg_kill_handler<F>( f: F, test_mode: bool ) -> Result<(), Error> where F: Fn(u32) -> BOOL + Send + Sync + 'static { // The proper way to do this is to use CELL.set(), because this can only |
︙ | ︙ | |||
140 141 142 143 144 145 146 | .map_err(|_| Error::internal("Unable to set shared OnceLock cell"))?; } let rc = unsafe { SetConsoleCtrlHandler(Some(ctrlhandler), 1) }; // Returns non-zero on success (rc != 0) .then_some(()) | | | 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | .map_err(|_| Error::internal("Unable to set shared OnceLock cell"))?; } let rc = unsafe { SetConsoleCtrlHandler(Some(ctrlhandler), 1) }; // Returns non-zero on success (rc != 0) .then_some(()) .ok_or_else(|| Error::internal("SetConsoleCtrlHandler failed"))?; Ok(()) } unsafe extern "system" fn ctrlhandler(ty: u32) -> BOOL { let Some(f) = CELL.get() else { return FALSE; }; f(ty) } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
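`setup_sync_fg_kill_handler` parks the closure in a process-wide `OnceLock` because `SetConsoleCtrlHandler` only accepts a plain function pointer with no captured state. The trick in isolation, with an ordinary Rust function standing in for the `extern "system"` handler and `bool` standing in for the Win32 `BOOL`:

```rust
use std::sync::OnceLock;

// The Win32 control handler must be a plain function pointer, so the
// closure is parked in a process-wide OnceLock and the handler function
// forwards to it. This mirrors `setup_sync_fg_kill_handler`/`ctrlhandler`.
static HANDLER: OnceLock<Box<dyn Fn(u32) -> bool + Send + Sync>> =
  OnceLock::new();

// Stand-in for the `extern "system"` ctrlhandler: no captured state,
// just a lookup in the cell.
fn ctrl_handler(ty: u32) -> bool {
  match HANDLER.get() {
    Some(f) => f(ty),
    None => false
  }
}

fn install<F>(f: F) -> Result<(), &'static str>
where
  F: Fn(u32) -> bool + Send + Sync + 'static
{
  HANDLER
    .set(Box::new(f))
    .map_err(|_| "handler already installed")
}

fn main() {
  install(|ty| ty == 0).expect("first install succeeds");
  assert!(ctrl_handler(0)); // handled (think CTRL_C_EVENT)
  assert!(!ctrl_handler(1)); // unknown event: not handled
  assert!(install(|_| true).is_err()); // the cell can only be set once
  println!("handler installed once and dispatching");
}
```

The set-once property is also why the real code treats a second registration as an internal error rather than silently replacing the handler.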
Changes to src/rt/systemd.rs.
1 2 3 4 5 6 7 8 9 10 11 12 13 | //! systemd service module. //! //! Implements systemd-specific service subsystem interactions. use sd_notify::NotifyState; use super::StateMsg; /// A service reporter that sends notifications to systemd. pub struct ServiceReporter {} impl super::StateReporter for ServiceReporter { fn starting(&self, checkpoint: u32, status: Option<StateMsg>) { | | < < | < > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 | //! systemd service module. //! //! Implements systemd-specific service subsystem interactions. use sd_notify::NotifyState; use super::StateMsg; /// A service reporter that sends notifications to systemd. pub struct ServiceReporter {} impl super::StateReporter for ServiceReporter { fn starting(&self, checkpoint: u32, status: Option<StateMsg>) { let text = status.map_or_else( || format!("Startup checkpoint {checkpoint}"), |msg| format!("Starting[{checkpoint}] {}", msg.as_ref()) ); if let Err(e) = sd_notify::notify(false, &[NotifyState::Status(&text)]) { log::error!("Unable to report service started state; {}", e); } } fn started(&self) { |
︙ | ︙ | |||
34 35 36 37 38 39 40 | // ToDo: First checkpoint is 1? if checkpoint == 0 { if let Err(e) = sd_notify::notify(false, &[NotifyState::Stopping]) { log::error!("Unable to report service started state; {}", e); } } | | | < | < > | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | // ToDo: First checkpoint is 1? if checkpoint == 0 { if let Err(e) = sd_notify::notify(false, &[NotifyState::Stopping]) { log::error!("Unable to report service started state; {}", e); } } let text = status.map_or_else( || format!("Stopping checkpoint {checkpoint}"), |msg| format!("Stopping[{checkpoint}] {}", msg.as_ref()) ); // ToDo: Is it okay to set status after "Stopping" has been set? if let Err(e) = sd_notify::notify(false, &[NotifyState::Status(&text)]) { log::error!("Unable to report service started state; {}", e); } } fn stopped(&self) {} } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
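The `map_or_else` status formatting used by both `starting` and `stopping` reduces to a small pure function, which makes the two output shapes easy to check. `starting_text` is an illustrative helper, not qsu API:

```rust
// With no message we get a generic checkpoint string; otherwise the
// message is folded into the "Starting[N]" form, as in the reporter above.
fn starting_text(checkpoint: u32, status: Option<&str>) -> String {
  status.map_or_else(
    || format!("Startup checkpoint {checkpoint}"),
    |msg| format!("Starting[{checkpoint}] {msg}")
  )
}

fn main() {
  assert_eq!(starting_text(3, None), "Startup checkpoint 3");
  assert_eq!(starting_text(3, Some("db ready")), "Starting[3] db ready");
  println!("{}", starting_text(3, Some("db ready")));
}
```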
Changes to src/rt/winsvc.rs.
1 2 3 4 5 6 7 8 | //! Windows service module. use std::{ ffi::OsString, sync::{Arc, OnceLock}, thread, time::Duration }; | > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 | //! Windows service module. //! //! While Windows services have a normal `main()` entry point, the way they //! work is that a (blocking) runtime is called, which calls an application //! callback where the service application logic is normally implemented. //! //! qsu does it a little differently; it runs the service application logic in //! a thread (called `svcapp`) and runs the Windows service subsystem on the //! main thread. Its (the service subsystem's) callback is used to monitor the //! subsystem for events, which are passed on to the qsu event handler //! application callback. use std::{ ffi::OsString, sync::{Arc, OnceLock}, thread, time::Duration }; |
︙ | ︙ | |||
23 24 25 26 27 28 29 | }, service_control_handler::{ self, ServiceControlHandlerResult, ServiceStatusHandle }, service_dispatcher }; | | | | | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | }, service_control_handler::{ self, ServiceControlHandlerResult, ServiceStatusHandle }, service_dispatcher }; use winreg::{enums::HKEY_LOCAL_MACHINE, RegKey}; #[cfg(feature = "wait-for-debugger")] use dbgtools_win::debugger; use crate::{ err::{CbErr, Error}, lumberjack::LumberJack, rt::{rttype, Demise, RunEnv, SrvAppRt, SvcEvt} }; use super::StateMsg; const SERVICE_TYPE: ServiceType = ServiceType::OWN_PROCESS; //const SERVICE_STARTPENDING_TIME: Duration = Duration::from_secs(10); |
︙ | ︙ | |||
71 72 73 74 75 76 77 78 79 | /// Buffer passed back to the application thread from the service subsystem /// thread. struct HandshakeMsg { /// Channel end-point used to send messages to the service subsystem. tx: UnboundedSender<ToSvcMsg>, /// Channel end-point used to receive messages from the service subsystem. | > > > > | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 | /// Buffer passed back to the application thread from the service subsystem /// thread. struct HandshakeMsg { /// Channel end-point used to send messages to the service subsystem. tx: UnboundedSender<ToSvcMsg>, /// Channel end-point used to send service event messages to the application /// callback. tx_svcevt: broadcast::Sender<SvcEvt>, /// Channel end-point used to receive messages from the service subsystem. rx_svcevt: broadcast::Receiver<SvcEvt> } /// A service reporter that forwards application state information to the /// windows service subsystem. pub struct ServiceReporter { tx: UnboundedSender<ToSvcMsg> |
︙ | ︙ | |||
118 119 120 121 122 123 124 | log::error!("Unable to send Stopped message; {}", e); } log::trace!("Stopped"); } } | > > > > > > | > > > | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | log::error!("Unable to send Stopped message; {}", e); } log::trace!("Stopped"); } } /// Run a service application under the Windows service subsystem. /// /// # Errors /// `Error::SubSystem` means the service could not be started. `Error::IO` /// means the internal worker could not be launched. #[allow(clippy::missing_panics_doc)] pub fn run<ApEr>(svcname: &str, st: SrvAppRt<ApEr>) -> Result<(), Error> where ApEr: Send + 'static { #[cfg(feature = "wait-for-debugger")] { debugger::wait_for_then_break(); debugger::output("Hello, debugger"); } // Create a one-shot channel used to receive an initial handshake from the
︙ | ︙ | |||
154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 | .name("svcapp".into()) .spawn(move || srvapp_thread(st, svcnm, rx_fromsvc))?; // Register generated `ffi_service_main` with the system and start the // service, blocking this thread until the service is stopped. service_dispatcher::start(svcname, ffi_service_main)?; match jh.join() { Ok(_) => Ok(()), Err(e) => *e .downcast::<Result<(), Error>>() .expect("Unable to downcast error from svcapp thread") } } | > > > > | | | > > > < | > > > > > | | > > > > > > > > | | | > > > | > > > > > > > | > > | > > | > > > > > | | 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 | .name("svcapp".into()) .spawn(move || srvapp_thread(st, svcnm, rx_fromsvc))?; // Register generated `ffi_service_main` with the system and start the // service, blocking this thread until the service is stopped. service_dispatcher::start(svcname, ffi_service_main)?; // The return value should be hard-coded to `Result<(), Error>`, so this // unwrap should be okay. match jh.join() { Ok(_) => Ok(()), Err(e) => *e .downcast::<Result<(), Error>>() .expect("Unable to downcast error from svcapp thread") } } /// Internal server application wrapper thread. fn srvapp_thread<ApEr>( st: SrvAppRt<ApEr>, svcname: String, rx_fromsvc: oneshot::Receiver<Result<HandshakeMsg, Error>> ) -> Result<(), CbErr<ApEr>> where ApEr: Send { // Wait for the service subsystem to report that it has initialized. // It passes along a channel end-point that can be used to send events to // the service manager. 
let Ok(res) = rx_fromsvc.blocking_recv() else { panic!("Unable to receive handshake"); }; let Ok(HandshakeMsg { tx, tx_svcevt, rx_svcevt }) = res else { panic!("Unable to receive handshake"); }; let sr = Arc::new(ServiceReporter { tx }); let re = RunEnv::Service(Some(svcname)); match st { SrvAppRt::Sync { svcevt_handler, rt_handler } => rttype::sync_main(rttype::SyncMainParams { re, svcevt_handler, rt_handler, sr, svcevt_ch: Some((tx_svcevt, rx_svcevt)), // Don't support test mode when running as a windows service test_mode: false }), #[cfg(feature = "tokio")] SrvAppRt::Tokio { rtbldr, svcevt_handler, rt_handler } => rttype::tokio_main( rtbldr, rttype::TokioMainParams { re, svcevt_handler, rt_handler, sr, svcevt_ch: Some((tx_svcevt, rx_svcevt)) } ), #[cfg(feature = "rocket")] SrvAppRt::Rocket { svcevt_handler, rt_handler } => rttype::rocket_main(rttype::RocketMainParams { re, svcevt_handler, rt_handler, sr, svcevt_ch: Some((tx_svcevt, rx_svcevt)) }) } } // Generate the windows service boilerplate. The boilerplate contains the // low-level service entry function (ffi_service_main) that parses incoming // service arguments into Vec<OsString> and passes them to user defined service |
︙ | ︙ | |||
223 224 225 226 227 228 229 230 231 232 233 234 235 236 | handshake_reply: HandshakeMsg, rx_tosvc: UnboundedReceiver<ToSvcMsg>, status_handle: ServiceStatusHandle } fn my_service_main(_arguments: Vec<OsString>) { // Start by pulling out the service name and the channel sender. let Xfer { svcname, tx_fromsvc } = take_shared_buffer(); | > > > | 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 | handshake_reply: HandshakeMsg, rx_tosvc: UnboundedReceiver<ToSvcMsg>, status_handle: ServiceStatusHandle } /// Windows Service main entry point. #[allow(clippy::needless_pass_by_value)] fn my_service_main(_arguments: Vec<OsString>) { // Start by pulling out the service name and the channel sender. let Xfer { svcname, tx_fromsvc } = take_shared_buffer(); |
︙ | ︙ | |||
246 247 248 249 250 251 252 | // server application. if tx_fromsvc.send(Ok(handshake_reply)).is_err() { log::error!("Unable to send handshake message"); return; } // Enter a loop that waits to receive a service termination event. | | < < | 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 | // server application. if tx_fromsvc.send(Ok(handshake_reply)).is_err() { log::error!("Unable to send handshake message"); return; } // Enter a loop that waits to receive a service termination event. svcloop(rx_tosvc, status_handle); } Err(e) => { // If svcinit() returns Err() we don't actually know if logging has been // enabled yet -- but we can't do much other than hope that it is and try // to output an error log. // ToDo: If dbgtools-win is used, then we should output to the debugger. if tx_fromsvc.send(Err(e)).is_err() { |
︙ | ︙ | |||
279 280 281 282 283 284 285 | // main thread. // ToDo: Need proper error handling: // - If the Parameters subkey can not be loaded, do we abort? // - If the cwd can not be changed to the WorkDir we should abort. if let Ok(svcparams) = get_service_params_subkey(svcname) { if let Ok(wd) = svcparams.get_value::<String, &str>("WorkDir") { std::env::set_current_dir(wd).map_err(|e| { | | > > | | 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 | // main thread. // ToDo: Need proper error handling: // - If the Parameters subkey can not be loaded, do we abort? // - If the cwd can not be changed to the WorkDir we should abort. if let Ok(svcparams) = get_service_params_subkey(svcname) { if let Ok(wd) = svcparams.get_value::<String, &str>("WorkDir") { std::env::set_current_dir(wd).map_err(|e| { Error::internal(format!("Unable to switch to WorkDir; {e}")) })?; } } // Create channel that will be used to receive messages from the application. let (tx_tosvc, rx_tosvc) = unbounded_channel(); // Create channel that will be used to send messages to the application. let (tx_svcevt, rx_svcevt) = broadcast::channel(16); // // Define system service event handler that will be receiving service events. // // ToDo: autoclone let tx_svcevt2 = tx_svcevt.clone(); let event_handler = move |control_event| -> ServiceControlHandlerResult { match control_event { ServiceControl::Interrogate => { log::debug!("svc signal received: interrogate"); // Notifies a service to report its current status information to the // service control manager. Always return NoError even if not // implemented.
ServiceControlHandlerResult::NoError } ServiceControl::Stop => { log::debug!("svc signal received: stop"); // Message the application that it's time to shut down if let Err(e) = tx_svcevt2.send(SvcEvt::Shutdown(Demise::Terminated)) { log::error!("Unable to send SvcEvt::Shutdown from winsvc; {}", e); } ServiceControlHandlerResult::NoError } ServiceControl::Continue => { log::debug!("svc signal received: continue");
︙ | ︙ | |||
350 351 352 353 354 355 356 | ); Err(e)?; } Ok(InitRes { handshake_reply: HandshakeMsg { tx: tx_tosvc, | > | > > > > > > > | | > > | > | | | | | | | | | | | | | | < | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | < < | | | | | | | | | | | | | | | | < < < < < < < < < > > > > > > > > > > > > > | > > > > > | > > > > > | > > > > > | 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 | ); Err(e)?; } Ok(InitRes { handshake_reply: HandshakeMsg { tx: tx_tosvc, tx_svcevt, rx_svcevt }, rx_tosvc, status_handle }) } /// Internal service state loop. /// /// Receives the current service application state from an internal channel and /// uses it to report the state to the windows service subsystem. /// /// Once a _Stopped_ state is received, the state will be reported to winsvc /// subsystem, and the loop will be broken out of so the thread exits. fn svcloop( mut rx_tosvc: UnboundedReceiver<ToSvcMsg>, status_handle: ServiceStatusHandle ) { // // Enter loop that waits for application state changes that should be // reported to the service subsystem. // Once the application reports that it has stopped, then break out of the // loop. 
// tracing::trace!("enter app state monitoring loop"); loop { let Some(ev) = rx_tosvc.blocking_recv() else { // All the sender halves have been deallocated log::error!("Sender endpoints unexpectedly disappeared"); break; }; match ev { ToSvcMsg::Starting(checkpoint) => { log::debug!("app reported that it is running"); if let Err(e) = status_handle.set_service_status(ServiceStatus { service_type: SERVICE_TYPE, current_state: ServiceState::StartPending, controls_accepted: ServiceControlAccept::empty(), exit_code: ServiceExitCode::Win32(0), checkpoint, wait_hint: SERVICE_STARTPENDING_TIME, process_id: None }) { log::error!( "Unable to set service status to 'start pending {checkpoint}'; \ {e}" ); } } ToSvcMsg::Started => { if let Err(e) = status_handle.set_service_status(ServiceStatus { service_type: SERVICE_TYPE, current_state: ServiceState::Running, controls_accepted: ServiceControlAccept::STOP, exit_code: ServiceExitCode::Win32(0), checkpoint: 0, wait_hint: Duration::default(), process_id: None }) { log::error!("Unable to set service status to 'started'; {e}"); } } ToSvcMsg::Stopping(checkpoint) => { log::debug!("app is shutting down"); if let Err(e) = status_handle.set_service_status(ServiceStatus { service_type: SERVICE_TYPE, current_state: ServiceState::StopPending, controls_accepted: ServiceControlAccept::empty(), exit_code: ServiceExitCode::Win32(0), checkpoint, wait_hint: SERVICE_STOPPENDING_TIME, process_id: None }) { log::error!( "Unable to set service status to 'stop pending {checkpoint}'; {e}" ); } } ToSvcMsg::Stopped => { if let Err(e) = status_handle.set_service_status(ServiceStatus { service_type: SERVICE_TYPE, current_state: ServiceState::Stopped, controls_accepted: ServiceControlAccept::empty(), exit_code: ServiceExitCode::Win32(0), checkpoint: 0, wait_hint: Duration::default(), process_id: None }) { log::error!("Unable to set service status to 'stopped'; {e}"); } // Break out of loop to terminate service subsystem break; } } } tracing::trace!("service 
terminated"); } const SVCPATH: &str = "SYSTEM\\CurrentControlSet\\Services"; const PARAMS: &str = "Parameters"; /// Create a read-only handle for a service's registry subkey. /// /// `HKLM:SYSTEM\CurrentControlSet\Services\[servicename]` /// /// # Errors /// Registry errors are returned as `Error::SubSystem`. pub fn read_service_subkey( service_name: &str ) -> Result<winreg::RegKey, Error> { let hklm = RegKey::predef(HKEY_LOCAL_MACHINE); let services = hklm.open_subkey(SVCPATH)?; let subkey = services.open_subkey(service_name)?; Ok(subkey) } /// Create a read/write handle for a service's registry subkey. /// /// `HKLM:SYSTEM\CurrentControlSet\Services\[servicename]` /// /// # Errors /// Registry errors are returned as `Error::SubSystem`. pub fn write_service_subkey( service_name: &str ) -> Result<winreg::RegKey, Error> { let hklm = RegKey::predef(HKEY_LOCAL_MACHINE); let services = hklm.open_subkey(SVCPATH)?; let subkey = services.open_subkey_with_flags(service_name, winreg::enums::KEY_WRITE)?; Ok(subkey) } /// Create a `Parameters` subkey for a service, and return a handle to it. /// /// `HKLM:SYSTEM\CurrentControlSet\Services\[servicename]\Parameters` /// /// # Errors /// Registry errors are returned as `Error::SubSystem`. pub fn create_service_params( service_name: &str ) -> Result<winreg::RegKey, Error> { let hklm = RegKey::predef(HKEY_LOCAL_MACHINE); let services = hklm.open_subkey(SVCPATH)?; let asrv = services.open_subkey(service_name)?; let (subkey, _disp) = asrv.create_subkey(PARAMS)?; Ok(subkey) } /// Get a read/write handle for the `Parameters` subkey for a service. /// /// `HKLM:SYSTEM\CurrentControlSet\Services\[servicename]\Parameters` /// /// # Errors /// Registry errors are returned as `Error::SubSystem`. 
pub fn get_service_params_subkey( service_name: &str ) -> Result<winreg::RegKey, Error> { let hklm = RegKey::predef(HKEY_LOCAL_MACHINE); let services = hklm.open_subkey(SVCPATH)?; let asrv = services.open_subkey(service_name)?; let subkey = asrv.open_subkey(PARAMS)?; Ok(subkey) } /// Load a service `Parameter` from the registry. /// /// `HKLM:SYSTEM\CurrentControlSet\Services\[servicename]\Parameters` /// /// # Errors /// Registry errors are returned as `Error::SubSystem`. pub fn get_service_param(service_name: &str) -> Result<winreg::RegKey, Error> { let hklm = RegKey::predef(HKEY_LOCAL_MACHINE); let services = hklm.open_subkey(SVCPATH)?; let asrv = services.open_subkey(service_name)?; let params = asrv.open_subkey(PARAMS)?; Ok(params) } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
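The state-reporting loop above (`svcloop`) can be sketched with plain `std` channels. This is a simplified, dependency-free stand-in: the `ToSvcMsg` enum below mirrors the crate's internal message type but is not the actual qsu definition, and instead of calling `set_service_status` it merely counts the state changes it would have reported.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for the crate's internal state messages.
#[derive(Debug)]
enum ToSvcMsg {
    Starting(u32),
    Started,
    Stopping(u32),
    Stopped,
}

/// Drain state changes until `Stopped` arrives, returning how many were
/// processed (the real loop reports each one to the service manager).
pub fn svcloop(rx: mpsc::Receiver<ToSvcMsg>) -> usize {
    let mut reported = 0;
    while let Ok(ev) = rx.recv() {
        reported += 1;
        if matches!(ev, ToSvcMsg::Stopped) {
            // Break out of loop to terminate the service subsystem.
            break;
        }
    }
    reported
}

/// Simulate an application thread walking through the full lifecycle.
pub fn demo() -> usize {
    let (tx, rx) = mpsc::channel();
    let app = thread::spawn(move || {
        tx.send(ToSvcMsg::Starting(1)).unwrap();
        tx.send(ToSvcMsg::Started).unwrap();
        tx.send(ToSvcMsg::Stopping(1)).unwrap();
        tx.send(ToSvcMsg::Stopped).unwrap();
    });
    let n = svcloop(rx);
    app.join().unwrap();
    n
}
```

The checkpoint values carried by `Starting`/`Stopping` correspond to the incrementing checkpoints that Windows expects while a service is in a pending state.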
Changes to tests/apperr.rs.
︙ | ︙ | |||
16 17 18 19 20 21 22 | #[test] fn error_from_sync_init() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > > | | > > | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | #[test] fn error_from_sync_init() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MySyncService { visited: Arc::clone(&visited), ..Default::default() } .fail_init() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_sync(svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only init() failed assert!(errs.init.is_some()); assert!(errs.run.is_none()); assert!(errs.shutdown.is_none()); let Error::Hello(s) = errs.init.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Sync::init()"); // Failing init should cause run() not to be called, but shutdown() should // still be called. let visited = Arc::into_inner(visited)
︙ | ︙ | |||
58 59 60 61 62 63 64 | #[test] fn error_from_sync_run() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | | > > | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | #[test] fn error_from_sync_run() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MySyncService { visited: Arc::clone(&visited), ..Default::default() } .fail_run() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_sync(svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only run() failed assert!(errs.init.is_none()); assert!(errs.run.is_some()); assert!(errs.shutdown.is_none()); let Error::Hello(s) = errs.run.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Sync::run()"); // Failing run should not hinder shutdown() from being called. let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc")
︙ | ︙ | |||
99 100 101 102 103 104 105 | #[test] fn error_from_sync_shutdown() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | | > > | | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 | #[test] fn error_from_sync_shutdown() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MySyncService { visited: Arc::clone(&visited), ..Default::default() } .fail_shutdown() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_sync(svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only shutdown() failed assert!(errs.init.is_none()); assert!(errs.run.is_none()); assert!(errs.shutdown.is_some()); let Error::Hello(s) = errs.shutdown.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Sync::shutdown()"); // All callbacks should have been visited let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc")
︙ | ︙ | |||
142 143 144 145 146 147 148 | #[test] fn error_from_tokio_init() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | > | > | | 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 | #[test] fn error_from_tokio_init() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MyTokioService { visited: Arc::clone(&visited), ..Default::default() } .fail_init() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_tokio(None, svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only init() failed assert!(errs.init.is_some()); assert!(errs.run.is_none()); assert!(errs.shutdown.is_none()); let Error::Hello(s) = errs.init.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Tokio::init()"); // Failing init should cause run() not to be called, but shutdown() should // still be called. let visited = Arc::into_inner(visited)
︙ | ︙ | |||
185 186 187 188 189 190 191 | #[test] fn error_from_tokio_run() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | > | > | | 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 | #[test] fn error_from_tokio_run() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MyTokioService { visited: Arc::clone(&visited), ..Default::default() } .fail_run() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_tokio(None, svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only run() failed assert!(errs.init.is_none()); assert!(errs.run.is_some()); assert!(errs.shutdown.is_none()); let Error::Hello(s) = errs.run.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Tokio::run()"); // Failing run should not hinder shutdown() from being called. let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc")
︙ | ︙ | |||
228 229 230 231 232 233 234 | #[test] fn error_from_tokio_shutdown() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | > | > | | 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 | #[test] fn error_from_tokio_shutdown() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MyTokioService { visited: Arc::clone(&visited), ..Default::default() } .fail_shutdown() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_tokio(None, svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only shutdown() failed assert!(errs.init.is_none()); assert!(errs.run.is_none()); assert!(errs.shutdown.is_some()); let Error::Hello(s) = errs.shutdown.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Tokio::shutdown()"); // All callbacks should have been visited let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc")
︙ | ︙ | |||
271 272 273 274 275 276 277 | #[test] fn error_from_rocket_init() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | > | > | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | #[test] fn error_from_rocket_init() { let runctx = RunCtx::new(SVCNAME).test_mode(); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MyRocketService { visited: Arc::clone(&visited), ..Default::default() } .fail_init() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_rocket(svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only init() failed assert!(errs.init.is_some()); assert!(errs.run.is_none()); assert!(errs.shutdown.is_none()); let Error::Hello(s) = errs.init.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Rocket::init()"); // Failing init should cause run() not to be called, but shutdown() should // still be called. let visited = Arc::into_inner(visited)
︙ | ︙ | |||
314 315 316 317 318 319 320 | #[test] fn error_from_rocket_run() { let runctx = RunCtx::new(SVCNAME).log_init(false); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | > | > | | 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 | #[test] fn error_from_rocket_run() { let runctx = RunCtx::new(SVCNAME).log_init(false); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MyRocketService { visited: Arc::clone(&visited), ..Default::default() } .fail_run() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_rocket(svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only run() failed assert!(errs.init.is_none()); assert!(errs.run.is_some()); assert!(errs.shutdown.is_none()); let Error::Hello(s) = errs.run.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Rocket::run()"); // Failing run should not hinder shutdown() from being called. let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc")
︙ | ︙ | |||
356 357 358 359 360 361 362 | #[test] fn error_from_rocket_shutdown() { let runctx = RunCtx::new(SVCNAME).log_init(false); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); | > | > | > | | 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 | #[test] fn error_from_rocket_shutdown() { let runctx = RunCtx::new(SVCNAME).log_init(false); // Prepare a server application context which keeps track of which callbacks // have been called let visited = Arc::new(Mutex::new(apps::Visited::default())); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new( apps::MyRocketService { visited: Arc::clone(&visited), ..Default::default() } .fail_shutdown() ); // Call RunCtx::run(), expecting a server application callback error. let Err(qsu::CbErr::SrvApp(errs)) = runctx.run_rocket(svcevt_handler, rt_handler) else { panic!("Not expected Err(qsu::Error::SrvApp(_))"); }; // Verify that only shutdown() failed assert!(errs.init.is_none()); assert!(errs.run.is_none()); assert!(errs.shutdown.is_some()); let Error::Hello(s) = errs.shutdown.unwrap() else { panic!("Not expected Error::Hello"); }; assert_eq!(s, "From Rocket::shutdown()"); // All callbacks should have been visited let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc")
︙ | ︙ |
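The `Arc<Mutex<Visited>>` bookkeeping that these tests rely on can be sketched in isolation. This is a simplified stand-in: the real tests use `parking_lot::Mutex`, while the sketch below uses `std::sync::Mutex` to stay dependency-free, and the handler calls are simulated inline.

```rust
use std::sync::{Arc, Mutex};

// Mirrors the tests' `Visited` flags, one per lifecycle callback.
#[derive(Default, Debug)]
pub struct Visited {
    pub init: bool,
    pub run: bool,
    pub shutdown: bool,
}

pub fn demo() -> Visited {
    let visited = Arc::new(Mutex::new(Visited::default()));

    // A service handler would hold a clone and flag each callback as it runs.
    let handle = Arc::clone(&visited);
    handle.lock().unwrap().init = true;
    handle.lock().unwrap().run = true;
    handle.lock().unwrap().shutdown = true;
    drop(handle);

    // After the runtime returns, unwrap the Arc to inspect the flags,
    // just as the tests do with `Arc::into_inner`.
    Arc::into_inner(visited)
        .expect("outstanding references to visited")
        .into_inner()
        .expect("mutex poisoned")
}
```

`Arc::into_inner` only succeeds once every clone has been dropped, which is why the tests can safely assert on the final state after the runtime has shut down.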
Changes to tests/apps/mod.rs.
1 2 3 4 | use std::sync::Arc; use parking_lot::Mutex; | | | 1 2 3 4 5 6 7 8 9 10 11 12 | use std::sync::Arc; use parking_lot::Mutex; use qsu::rt::{InitCtx, RunEnv, ServiceHandler, TermCtx}; #[cfg(feature = "tokio")] use qsu::rt::TokioServiceHandler; #[cfg(feature = "rocket")] use qsu::{ |
︙ | ︙ | |||
66 67 68 69 70 71 72 | self.fail.shutdown(); self } } impl ServiceHandler for MySyncService { | > > | < < < < | | | 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | self.fail.shutdown(); self } } impl ServiceHandler for MySyncService { type AppErr = Error; fn init(&mut self, _ictx: InitCtx) -> Result<(), Self::AppErr> { self.visited.lock().init = true; if self.fail.init { Err(Error::hello("From Sync::init()"))?; } Ok(()) } fn run(&mut self, _re: &RunEnv) -> Result<(), Self::AppErr> { self.visited.lock().run = true; if self.fail.run { Err(Error::hello("From Sync::run()"))?; } Ok(()) } fn shutdown(&mut self, _tctx: TermCtx) -> Result<(), Self::AppErr> { self.visited.lock().shutdown = true; if self.fail.shutdown { Err(Error::hello("From Sync::shutdown()"))?; } Ok(()) } } |
︙ | ︙ | |||
123 124 125 126 127 128 129 | self } } #[cfg(feature = "tokio")] #[qsu::async_trait] impl TokioServiceHandler for MyTokioService { | > > | | < < < < | | 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | self } } #[cfg(feature = "tokio")] #[qsu::async_trait] impl TokioServiceHandler for MyTokioService { type AppErr = Error; async fn init(&mut self, _ictx: InitCtx) -> Result<(), Self::AppErr> { self.visited.lock().init = true; if self.fail.init { Err(Error::hello("From Tokio::init()"))?; } Ok(()) } async fn run(&mut self, _re: &RunEnv) -> Result<(), Self::AppErr> { self.visited.lock().run = true; if self.fail.run { Err(Error::hello("From Tokio::run()"))?; } Ok(()) } async fn shutdown(&mut self, _tctx: TermCtx) -> Result<(), Self::AppErr> { self.visited.lock().shutdown = true; if self.fail.shutdown { Err(Error::hello("From Tokio::shutdown()"))?; } Ok(()) } } |
︙ | ︙ | |||
180 181 182 183 184 185 186 187 188 189 | self } } #[cfg(feature = "rocket")] #[qsu::async_trait] impl RocketServiceHandler for MyRocketService { async fn init( &mut self, _ictx: InitCtx | > > | | < | | | 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | self } } #[cfg(feature = "rocket")] #[qsu::async_trait] impl RocketServiceHandler for MyRocketService { type AppErr = Error; async fn init( &mut self, _ictx: InitCtx ) -> Result<Vec<Rocket<Build>>, Self::AppErr> { self.visited.lock().init = true; if self.fail.init { Err(Error::hello("From Rocket::init()"))?; } Ok(Vec::new()) } async fn run( &mut self, _rockets: Vec<Rocket<Ignite>>, _re: &RunEnv ) -> Result<(), Self::AppErr> { self.visited.lock().run = true; if self.fail.run { Err(Error::hello("From Rocket::run()"))?; } Ok(()) } async fn shutdown(&mut self, _tctx: TermCtx) -> Result<(), Self::AppErr> { self.visited.lock().shutdown = true; if self.fail.shutdown { Err(Error::hello("From Rocket::shutdown()"))?; } Ok(()) } } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 : |
Changes to tests/err/mod.rs.
1 2 3 4 5 6 7 8 9 10 | use std::{fmt, io}; #[derive(Debug)] pub enum Error { Hello(String), IO(String), Qsu(String) } impl std::error::Error for Error {} | < > | < | < < | < < | < | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 | use std::{fmt, io}; #[derive(Debug)] pub enum Error { Hello(String), IO(String), Qsu(String) } impl std::error::Error for Error {} impl Error { #[allow(clippy::needless_pass_by_value)] pub fn hello(msg: impl ToString) -> Self { Self::Hello(msg.to_string()) } } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Self::Hello(s) => write!(f, "Hello error; {s}"), Self::IO(s) => write!(f, "I/O error; {s}"), Self::Qsu(s) => write!(f, "qsu error; {s}") } } } impl From<io::Error> for Error { fn from(err: io::Error) -> Self { Self::IO(err.to_string()) } } impl From<qsu::CbErr<Self>> for Error { fn from(err: qsu::CbErr<Self>) -> Self { Self::Qsu(err.to_string()) } } /* /// Convenience converter used to pass an application-defined error from the /// qsu inner runtime back out from the qsu runtime. impl From<Error> for qsu::Error { fn from(err: Error) -> qsu::Error { qsu::Error::app(err) } } */ impl From<Error> for qsu::CbErr<Error> { fn from(err: Error) -> Self { Self::App(err) } } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
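The `From` conversions above exist so that the `?` operator can transparently map foreign errors into the test's `Error` type. A trimmed-down, self-contained sketch of that pattern (only the `IO` variant, no qsu types):

```rust
use std::{fmt, io};

// Minimal version of the tests' error enum.
#[derive(Debug)]
pub enum Error {
    IO(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Self::IO(s) => write!(f, "I/O error; {s}"),
        }
    }
}

// With this impl, `?` on any `io::Result` converts automatically.
impl From<io::Error> for Error {
    fn from(err: io::Error) -> Self {
        Self::IO(err.to_string())
    }
}

// Any function returning `Result<_, Error>` can now use `?` on io calls;
// the path below is deliberately nonexistent so this always errors.
pub fn read_missing() -> Result<String, Error> {
    let s = std::fs::read_to_string("/definitely/not/here")?;
    Ok(s)
}
```

This is the same mechanism the tests use to flow `qsu::CbErr<Error>` back into `Error` and vice versa.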
Changes to tests/initrunshutdown.rs.
︙ | ︙ | |||
19 20 21 22 23 24 25 | assert!(!visited.init); assert!(!visited.run); assert!(!visited.shutdown); let visited = Arc::new(Mutex::new(visited)); | > | < | | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | assert!(!visited.init); assert!(!visited.run); assert!(!visited.shutdown); let visited = Arc::new(Mutex::new(visited)); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new(apps::MySyncService { visited: Arc::clone(&visited), ..Default::default() }); let Ok(()) = runctx.run_sync(svcevt_handler, rt_handler) else { panic!("run_sync() unexpectedly failed"); }; let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc") .into_inner(); assert!(visited.init); |
︙ | ︙ | |||
51 52 53 54 55 56 57 | assert!(!visited.init); assert!(!visited.run); assert!(!visited.shutdown); let visited = Arc::new(Mutex::new(visited)); | > | < | | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 | assert!(!visited.init); assert!(!visited.run); assert!(!visited.shutdown); let visited = Arc::new(Mutex::new(visited)); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new(apps::MyTokioService { visited: Arc::clone(&visited), ..Default::default() }); let Ok(()) = runctx.run_tokio(None, svcevt_handler, rt_handler) else { panic!("run_tokio() unexpectedly failed"); }; let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc") .into_inner(); assert!(visited.init);
︙ | ︙ | |||
84 85 86 87 88 89 90 | assert!(!visited.init); assert!(!visited.run); assert!(!visited.shutdown); let visited = Arc::new(Mutex::new(visited)); | > | | | 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 | assert!(!visited.init); assert!(!visited.run); assert!(!visited.shutdown); let visited = Arc::new(Mutex::new(visited)); let svcevt_handler = Box::new(move |_msg| {}); let rt_handler = Box::new(apps::MyRocketService { visited: Arc::clone(&visited), ..Default::default() }); let Ok(()) = runctx.run_rocket(svcevt_handler, rt_handler) else { panic!("run_rocket() unexpectedly failed"); }; let visited = Arc::into_inner(visited) .expect("Unable to into_inner Arc") .into_inner(); assert!(visited.init); assert!(visited.run); assert!(visited.shutdown); } // vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
Changes to www/changelog.md.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | # Change log ⚠️ indicates a breaking change. ## [Unreleased] [Details](/vdiff?from=qsu-0.5.0&to=trunk) ### Added ### Changed ### Removed --- ## [0.5.0] - 2024-05-20 [Details](/vdiff?from=qsu-0.4.1&to=qsu-0.5.0) | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | # Change log ⚠️ indicates a breaking change. ## [Unreleased] [Details](/vdiff?from=qsu-0.5.0&to=trunk) ### Added - The reload service event has been implemented. - It is not supported on Windows. (The Windows service subsystem offers no "reload" event). - On unixy systems it is triggered through the SIGHUP signal. - When using this feature on systemd, the server type should be set to `notify-reload`, and qsu will automatically report `RELOADING`, `MONOTONIC_USEC` or `READY` to systemd as needed. - Translate SIGUSR1/SIGUSR2 signals to `SvcEvt`'s on unixy platforms. ### Changed - ⚠️ Rename instances of "trace level" to "trace filter". - ⚠️ `SvcEvt::Shutdown` and `SvcEvt::Terminate` have been merged into the variant `SvcEvt::Shutdown(Demise)`, where `Demise` can be used to determine if the service application is terminating via interruption/Ctrl+C, service shutdown request or if the service application reached its end without any external termination requests. - ⚠️ Introduce `CbErr<ApEr>` to carry an application-specific error type, and use `Error` to only store library errors. - Installer: - Allow display name and description to be set on unsupported platforms, to allow application code to avoid platform gates. - On Windows, implicitly add `Tcpip` as a service dependency for services marked as a netservice. ### Removed - ⚠️ The `SvcEvtReader` channel end-point has been removed. Previously this was passed to the service handlers' `run()` method.
Now, instead, the application passes a service event handler closure when creating the runtime. To simulate the old behavior, an application can pass a closure that forwards all incoming `SvcEvt` events to a channel, which has its receiving end-point stored in the service handler. --- ## [0.5.0] - 2024-05-20 [Details](/vdiff?from=qsu-0.4.1&to=qsu-0.5.0) |
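For migrators, the forwarding pattern can be sketched like this. Note that the `SvcEvt` enum below is a minimal stand-in used for illustration, not qsu's actual event type:

```rust
use std::sync::mpsc;

// Stand-in for qsu's `SvcEvt`; only the forwarding pattern matters here.
#[derive(Debug, PartialEq)]
enum SvcEvt {
    Shutdown,
}

// Returns the event that arrived on the channel, demonstrating how a
// forwarding closure recreates the old `SvcEvtReader` behavior.
fn forward_demo() -> SvcEvt {
    let (tx, rx) = mpsc::channel();

    // This closure is what would be handed to the runtime as the service
    // event handler; it forwards every incoming event into the channel.
    let svcevt_handler = move |evt: SvcEvt| {
        let _ = tx.send(evt);
    };

    // Simulate the runtime delivering an event.
    svcevt_handler(SvcEvt::Shutdown);

    // The service handler, holding `rx`, can consume events as before.
    rx.recv().expect("event was forwarded")
}

fn main() {
    assert_eq!(forward_demo(), SvcEvt::Shutdown);
}
```

In a real application the receiving end-point would be stored in the service handler struct, where the removed `SvcEvtReader` used to live.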
︙ | ︙ |
Changes to www/design-notes.md.
1 2 | # Design Notes | | | | | | < | > | > < > > | | < < > | < | | | < < | < | | < < < | < | | | > | < | | | < | > | < | > > > < > < < < < < < < < > > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | # Design Notes _qsu_'s primary function is to provide a service runtime layer that sits between the operating system's service subsystem and the application server code. It provides abstractions that allow a server to behave in a unified way, regardless of the actual service subsystem type (it even strives to support the same interface and semantics when running as a foreground process). In order to write a service application, the developer first chooses a runtime to integrate against. This basically comes down to choosing whether the service runs async or non-async code. Next, they implement a runtime-type-specific trait, which has three methods: `init()`, `run()` and `shutdown()`. Some service subsystems have special "startup" and "shutdown" phases. The service handler's trait methods `init()` and `shutdown()` correspond to these phases, and qsu will report (as appropriate) to the service subsystem that these phases have been entered as the corresponding callback methods are called. An application also needs to set up a service event handler. This is a closure that will receive events from the service subsystem. When running as a foreground process, qsu simulates the same behavior so the application will receive the same events. Specifically, when a Windows Service is stopped, a `SvcEvt::Shutdown(_)` event will be received by the event handler. When a service application is run in foreground mode, pressing Ctrl+C or Ctrl+Break will cause the same `SvcEvt::Shutdown(_)` to be sent. Once the service handler trait has been implemented and the service event closure has been created, the qsu service runtime is called, passing along the two handlers. 
At this point qsu will initialize logging and then call the service handler's `init()` implementation; if successful, it will call its `run()` implementation and wait until it returns before calling the `shutdown()` implementation and finally terminating the runtime. Applications must themselves translate shutdown events received by the service event handler to an actual termination of the service application. This is typically done using channels or cancellation tokens. ## Errors There are several application callbacks in qsu; primarily in the service application runtime, but also in the argument parser. It is recognized that service applications that encounter fatal errors may want to return the application-specific error back to the original caller (for instance to return different appropriate exit codes to the calling shell). To allow this, callbacks return an error type `CbErr<ApEr>`, which is generic over an application-defined type used to carry the application error. However, there are parts of qsu that don't need this -- even parts that become needlessly complicated having to support the generic parameter -- so there's an `Error` type that does not carry the `ApEr` generic. `CbErr` can carry the `Error` type within one of its variants. ## ToDo - Document the argument parser - Document the service installer facilities |
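The lifecycle described above can be sketched as follows. The trait, struct and channel here are simplified stand-ins for illustration, not qsu's actual API; real signatures differ:

```rust
use std::sync::mpsc;

// Stand-in for the runtime-type-specific service handler trait.
trait ServiceHandler {
    fn init(&mut self) -> Result<(), String>;
    fn run(&mut self) -> Result<(), String>;
    fn shutdown(&mut self) -> Result<(), String>;
}

// A toy service that records which lifecycle phases were entered.
struct MySvc {
    // Receiving end of the channel the service event closure forwards into.
    stop_rx: mpsc::Receiver<()>,
    phases: Vec<&'static str>,
}

impl ServiceHandler for MySvc {
    fn init(&mut self) -> Result<(), String> {
        self.phases.push("init"); // subsystem would be told "starting"
        Ok(())
    }

    fn run(&mut self) -> Result<(), String> {
        self.phases.push("run");
        // Block until the service event handler signals shutdown.
        self.stop_rx.recv().map_err(|e| e.to_string())
    }

    fn shutdown(&mut self) -> Result<(), String> {
        self.phases.push("shutdown"); // subsystem would be told "stopping"
        Ok(())
    }
}

// Drive init/run/shutdown in order, the way the wrapper runtime would.
fn run_lifecycle() -> Vec<&'static str> {
    let (stop_tx, stop_rx) = mpsc::channel();

    // A service event closure would normally send this on a shutdown
    // event; here we queue it up front so `run()` returns immediately.
    stop_tx.send(()).unwrap();

    let mut svc = MySvc { stop_rx, phases: Vec::new() };
    svc.init().unwrap();
    svc.run().unwrap();
    svc.shutdown().unwrap();
    svc.phases
}

fn main() {
    assert_eq!(run_lifecycle(), ["init", "run", "shutdown"]);
}
```

The channel used here is one way to translate a shutdown event into termination of `run()`; a cancellation token would serve the same purpose.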
Changes to www/index.md.
1 2 | # qsu | | | < | < | < | < | < < | < < < < | < < < < | < | < < | | < | < < < | | < | | < | > > | | | > < < < < < < < < < < < | < < < < < < < < > > | < < < < < < | < < < < < > | | | > | > < < < < < < < < < < < < < < < < < < < < < < < < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | # qsu The _qsu_ ("kazoo") crate offers portable service utilities with an ([opinionated](./opinionated.md)) service wrapper runtime. _qsu_'s primary objective is to allow a service developer to focus on the actual service application code, without having to bother with service subsystem-specific integrations -- while at the same time allowing the service application to run as a regular foreground process, without the code needing to diverge between the two. ## Features - _qsu_ offers two primary runtime types: - `Sync` is for "non-async" service applications. - `Tokio` is for [tokio](https://tokio.rs/)-based async service applications. - In addition it offers special-purpose service runtimes: - `Rocket` is a tokio/async runtime environment with helpers for [Rocket](https://rocket.rs/) applications. - The runtime uses two logging systems: [`log`](https://crates.io/crates/log) and [`tracing`](https://crates.io/crates/tracing) -- it assumes that `log` is used for production logs, and `tracing` is used for developer logs. - It offers an optional command line argument parser, based on [clap](https://crates.io/crates/clap), that provides a (configurable) command line interface for registering, running and deregistering the service. - It provides an installation module that can register/deregister services on Windows, and generate service unit files for systemd or plist files for launchd. ## Known limitations - There are several assumptions made in _qsu_ about paths being utf-8. 
Even if a public interface takes a `Path` or `PathBuf`, the function may return an error if the path isn't utf-8 compliant. ## Further reading - [How to use the qsu runtime](./qsurt.md) - This should be read by both developers and administrators using a service written on top of qsu. - See [design notes](./design-notes.md) for a high-level overview of how the _qsu_ wrapper runtime is implemented. - Examples: - The source repository contains several qsu examples. - The [staticrocket](https://crates.io/crates/staticrocket) crate uses qsu. ## Feature labels in documentation The crate's documentation uses automatically generated feature labels, which currently require nightly features. To build the documentation locally, use: ``` RUSTFLAGS="--cfg docsrs" RUSTDOCFLAGS="--cfg docsrs" cargo +nightly doc --all-features ``` ## Change log The details of changes can always be found in the timeline, but for a high-level view of changes between released versions there's a manually maintained [Change Log](./changelog.md). |
︙ | ︙ |
Added www/opinionated.md.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | # Why is qsu opinionated, and what is it opinionated about? Service subsystems can be complicated and offer a lot of features. Even systemd services, which are relatively simple compared to the Windows Service subsystem, have a lot of features and service integration possibilities. It is not the goal of qsu to be able to express every type of service; its goal is merely to facilitate the development of some hand-wavy "normal" service, where the developer should minimally need to worry about platform/subsystem-specific integrations. In order to accomplish this it makes a few assumptions: - Some features, like configuration reload events, are available on all platforms, but they will never be generated on some (for instance, the Windows service subsystem will never generate them). - Linux has different subsystems to manage services. qsu only allows special integration against systemd, and is limited to the `notify` and `notify-reload` service types. - There are two types of logs: "production logs" are for end-users; "developer logs" are for developers and troubleshooting only. qsu uses the `log` crate for "production logs" and the `tracing` crate for "developer logs". It assumes ownership of these runtimes, in the sense that it will initialize them and assumes no one else does so. - While developing service applications, the developer may not want to have to install the program as a service, but run it as a regular foreground program. Doing this should require minimal diverging code. - qsu claims ownership of certain signals on unixy platforms. On Windows it claims ownership of Ctrl+C and Ctrl+Break when running in foreground mode. 
## Agnostic to the developer, but idiomatic to the end-user While it's a goal to provide a simple and unified developer experience when writing service applications (regardless of the service subsystem the application is run on top of), the same is not true from the end-user administrator's perspective. It can be very annoying when foreign idioms are brought into a system -- it's not unreasonable to assume that people want new programs to behave like all the other programs they are accustomed to on their platform. As an example, Windows administrators will likely expect certain service configuration parameters to be stored in the registry, under the service's registry subkey. qsu caters to this, and this will inevitably bleed into the developer's domain -- but the goal is to limit subsystem-specifics as much as possible for the developer, while enabling idiomatic behaviors for the system administrators. |
Added www/qsurt.md.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 | # Using the qsu runtime A service using the qsu runtime can run in two different contexts. One is (perhaps not optimally) referred to as running in the _foreground_, and the other is referred to as running _as a service_. The intended use-case for running a service in foreground mode is during development and initial testing and troubleshooting of a service application. Normally when running in this mode, the service is run from the command line, in a terminal window, and all its logs are written to the terminal. Running _as a service_ means that the service application is assumed to be running under a service subsystem, such as the Windows Service subsystem, or systemd on linux. Logs will be sent to the appropriate system-specific logging system (the Windows Event Log on Windows, the systemd journal on linux with systemd). Starting and stopping a qsu application running as a service should be done using the regular system service management tools. When a qsu application uses the qsu argument parser, running the command without any subcommand will cause it to run in foreground mode. Running it with the subcommand `run-service` (though this is configurable) causes the service application to integrate into the platform's native service subsystem as appropriate. ## Configuring logging When running in foreground mode, production logging is controlled using the `LOG_LEVEL` environment variable. It can be set to `none`, `error`, `warn`, `info`, `debug` and `trace` (in order of increasing verbosity). If the variable is not set, it will default to `warn`. 
To configure the development logs, use the `TRACE_FILTER` environment variable instead. This log uses the [`tracing_subscriber`](https://crates.io/crates/tracing_subscriber) crate's environment filter. The filter format is feature-rich and is out of scope in this document, but in its simplest form it can be set to the same values as `LOG_LEVEL`. In order to reduce noise it can be configured to only observe certain modules. This is accomplished by disabling all logs, and then only enabling a subset: `TRACE_FILTER="none,qsu::rt=trace"`. The development log defaults to `none`. In some cases it may be helpful to write the development logs to a file. This can be done by setting the `TRACE_FILE` environment variable to the path and filename of the logfile to write developer logs to. On unixy platforms, the same environment variables (`LOG_LEVEL`, `TRACE_FILTER`, `TRACE_FILE`) control logging when running as a service. On Windows, when running as a service, production logging will log to the Windows Event Log. The development logs will be lost unless written to a file. The logging is controlled in the registry under `HKLM:\SYSTEM\CurrentControlSet\Services\[service_name]\Parameters`, where: - The key `LogLevel` takes the role of the `LOG_LEVEL` environment variable. - The keys `TraceFile` and `TraceFilter` correspond to the environment variables `TRACE_FILE` and `TRACE_FILTER`. Both of these must be configured in the registry to enable development logging. ## Unixy platforms & signals On unixy platforms, the qsu runtime claims ownership of certain signals. `SIGINT` and `SIGTERM` are used to indicate that the service should shut down. `SIGHUP` can be used to instruct the service application to reload its configuration. `SIGUSR1` and `SIGUSR2` are translated to custom application-specific events. Applications/libraries must avoid disrupting qsu's signal handlers. 
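As an example, assuming the service runs under the process name `myservice` (a hypothetical name; substitute your own binary), these signals can be sent from a shell:

```shell
# Ask the service to reload its configuration (SIGHUP).
pkill -HUP -x myservice

# Deliver the custom application-specific events (SIGUSR1/SIGUSR2).
pkill -USR1 -x myservice
pkill -USR2 -x myservice
```

Sending `SIGINT` or `SIGTERM` (e.g. `pkill -TERM -x myservice`) requests a shutdown, just like stopping the service through the service manager.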
## Windows foreground When running in foreground mode, the qsu runtime on Windows claims ownership of the `Ctrl+C` and `Ctrl+Break` handlers. Applications/libraries must avoid disrupting these. ## Windows service When running under the Windows Service subsystem, qsu will check if the service's `Parameters` registry subkey contains a string called `WorkDir`. If it is set, qsu will change the process's current working directory to that directory before calling the service handler's `init()`. |