qsu

Check-in Differences

Difference From qsu-0.0.2 To qsu-0.0.3

2023-10-25 14:27  Remove CbOrigin. Use AppErr for argp errors. check-in: 2190365c01 user: jan tags: trunk
2023-10-23 12:29  Update index.md. check-in: 0d49c4abb4 user: jan tags: qsu-0.0.3, trunk
2023-10-23 12:20  Release maintenance. check-in: ef926ac904 user: jan tags: trunk
2023-10-19 03:59  Make a note about staticrocket. check-in: a54c7f6793 user: jan tags: trunk
2023-10-19 03:32  Category cleanup. check-in: 83c296e121 user: jan tags: qsu-0.0.2, trunk
2023-10-19 03:29  Cleanup. check-in: 00799d58af user: jan tags: trunk

Changes to .efiles.

Cargo.toml
README.md
www/index.md
www/design-notes.md
www/changelog.md
src/err.rs
src/lib.rs
src/lumberjack.rs
src/rt.rs
src/nosvc.rs
src/systemd.rs
src/winsvc.rs
src/rt/nosvc.rs
src/rt/systemd.rs
src/rt/winsvc.rs
src/signals.rs
src/signals/unix.rs
src/signals/win.rs
src/rttype.rs
src/rttype/sync.rs
src/rttype/tokio.rs
src/rttype/rocket.rs
src/rt/rttype.rs
src/rt/rttype/sync.rs
src/rt/rttype/tokio.rs
src/rt/rttype/rocket.rs
src/rt/signals.rs
src/rt/signals/unix.rs
src/rt/signals/win.rs
src/installer.rs
src/installer/winsvc.rs
src/installer/launchd.rs
src/installer/systemd.rs
src/argp.rs
tests/err/mod.rs
tests/apps/mod.rs
tests/apperr.rs
tests/initrunshutdown.rs
examples/hellosvc.rs
examples/hellosvc-tokio.rs
examples/hellosvc-rocket.rs
examples/err/mod.rs
examples/procres/mod.rs
examples/argp/mod.rs

Changes to Cargo.toml.

[package]
name = "qsu"
version = "0.0.2"
version = "0.0.3"
edition = "2021"
license = "0BSD"
categories = [ "asynchronous" ]
keywords = [ "service", "systemd", "winsvc" ]
repository = "https://repos.qrnch.tech/pub/qsu"
description = "Service subsystem wrapper."
rust-version = "1.56"
exclude = [
  ".fossil-settings",
  ".efiles",
  ".fslckout",
  "www",
  "build_docs.sh",
  "Rocket.toml",
  "rustfmt.toml"
]

[features]
default = ["rt"]
clap = ["dep:clap", "dep:itertools"]
full = ["clap", "installer", "rocket", "systemd"]
full = ["clap", "installer", "rocket", "rt", "systemd", "tokio"]
installer = ["dep:sidoc"]
systemd = ["dep:sd-notify"]
#tokio = ["dep:tokio"]
#rocket = ["dep:rocket", "dep:tokio"]
rocket = ["dep:rocket"]
rocket = ["rt", "dep:rocket", "tokio"]
rt = []
tokio = ["rt", "tokio/macros", "tokio/rt-multi-thread", "tokio/signal"]
wait-for-debugger = ["dep:dbgtools-win"]

[dependencies]
async-trait = { version = "0.1.73" }
async-trait = { version = "0.1.74" }
chrono = { version = "0.4.24" }
clap = { version = "4.4.6", optional = true, features = [
  "derive", "env", "string", "wrap_help"
] }
env_logger = { version = "0.10.0" }
futures = { version = "0.3.28" }
itertools = { version = "0.11.0", optional = true }
killswitch = { version = "0.4.2" }
log = { version = "0.4.20" }
parking_lot = { version = "0.12.1" }
rocket = { version = "0.5.0-rc.3", optional = true }
sidoc = { version = "0.1.0", optional = true }
tokio = { version = "1.33.0", features = [
tokio = { version = "1.33.0", features = ["sync"] }
  "macros", "rt-multi-thread", "signal", "sync"
] }
time = { version = "0.3.20", features = ["macros"] }
tracing = { version = "0.1.37" }
tracing = { version = "0.1.40" }

[dependencies.tracing-subscriber]
version = "0.3.17"
default-features = false
features = ["env-filter", "time", "fmt", "ansi"]

[target.'cfg(target_os = "linux")'.dependencies]
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs", "--generate-link-to-definition"]

[[example]]
name = "hellosvc"
required-features = ["clap", "installer"]
required-features = ["clap", "installer", "rt"]

[[example]]
name = "hellosvc-tokio"
required-features = ["clap", "installer"]
required-features = ["clap", "installer", "rt", "tokio"]

[[example]]
name = "hellosvc-rocket"
required-features = ["clap", "installer", "rocket"]
required-features = ["clap", "installer", "rt", "rocket"]

Changes to examples/err/mod.rs.

}

impl From<qsu::Error> for Error {
  fn from(err: qsu::Error) -> Self {
    Error::Qsu(err.to_string())
  }
}

/*
/// Convenience converter used to pass application-defined errors from the
/// qsu inner runtime back out from the qsu runtime.
impl From<Error> for qsu::Error {
  fn from(err: Error) -> qsu::Error {
    qsu::Error::app(err)
  }
}
*/

impl From<Error> for qsu::AppErr {
  fn from(err: Error) -> qsu::AppErr {
    qsu::AppErr::new(err)
  }
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
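
The new `From<Error> for qsu::AppErr` conversion above is what lets the example handlers return `Result<_, qsu::AppErr>` while still using `?` on their own error type. Below is a minimal sketch of that pattern, assuming an application error enum similar to the example's `Error`; the `load_config` helper is made up for the illustration.

use qsu::AppErr;

// Application-defined error type, mirroring the example's `Error`.
#[derive(Debug)]
enum Error {
  Config(String)
}

// Same conversion as in the diff above: box the error into an AppErr.
impl From<Error> for AppErr {
  fn from(err: Error) -> AppErr {
    AppErr::new(err)
  }
}

// Hypothetical fallible helper that returns the application error type.
fn load_config() -> Result<(), Error> {
  Err(Error::Config("missing file".into()))
}

// A callback in the style of `ServiceHandler::init()`: `?` converts
// `Error` into `AppErr` through the `From` impl above.
fn init() -> Result<(), AppErr> {
  load_config()?;
  Ok(())
}

fn main() {
  if let Err(apperr) = init() {
    // Recover the concrete error type from the boxed AppErr.
    let Error::Config(msg) = apperr.unwrap_inner::<Error>();
    eprintln!("init failed: {}", msg);
  }
}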

Changes to examples/hellosvc-rocket.rs.

#[macro_use]
extern crate rocket;

mod argp;
mod err;
mod procres;

use qsu::{
  argp::ArgParser, RocketServiceHandler, StartState, StopState, SvcEvt,
  SvcEvtReader, SvcType
  argp::ArgParser,
  rt::{
    RocketServiceHandler, SrvAppRt, StartState, StopState, SvcEvt,
    SvcEvtReader
  }
};

use rocket::{Build, Ignite, Rocket};

use err::Error;
use procres::ProcRes;


struct MyService {}

#[qsu::async_trait]
impl RocketServiceHandler for MyService {
  async fn init(
    &mut self,
    _ss: StartState
  ) -> Result<Vec<Rocket<Build>>, qsu::Error> {
    ss: StartState
  ) -> Result<Vec<Rocket<Build>>, qsu::AppErr> {
    tracing::trace!("Running init()");

    let mut rockets = vec![];

    ss.report(Some("Building a rocket!".into()));
    let rocket = rocket::build().mount("/", routes![index]);

    ss.report(Some("Pushing a rocket".into()));
    rockets.push(rocket);

    Ok(rockets)
  }

  async fn run(
    &mut self,
    rockets: Vec<Rocket<Ignite>>,
    mut set: SvcEvtReader
  ) -> Result<(), qsu::Error> {
  ) -> Result<(), qsu::AppErr> {
    for rocket in rockets {
      tokio::task::spawn(async {
        rocket.launch().await.unwrap();
      });
    }

    loop {
      tokio::select! {
        evt = set.arecv() => {
          match evt {
            Some(SvcEvt::Shutdown) => {
              tracing::info!("The service subsystem requested that the application shut down");
              tracing::info!("The service subsystem requested that the
                  application shut down");
              break;
            }
            Some(SvcEvt::Terminate) => {
              tracing::info!(
                "The service subsystem requested that the application terminate"
                "The service subsystem requested that the application
                terminate"
              );
              break;
            }
            Some(SvcEvt::ReloadConf) => {
              tracing::info!("The service subsystem requested that application reload configuration");
              tracing::info!("The service subsystem requested that application
                  reload configuration");
            }
            _ => { }
          }
        }
      }
    }

    Ok(())
  }

  async fn shutdown(&mut self, _ss: StopState) -> Result<(), qsu::Error> {
  async fn shutdown(&mut self, ss: StopState) -> Result<(), qsu::AppErr> {
    ss.report(Some(format!("Entered {}", "shutdown").into()));
    tracing::trace!("Running shutdown()");
    Ok(())
  }
}


fn main() -> ProcRes {
    .expect("Unable to determine default service name");

  // Parse, and process, command line arguments.
  let mut argsproc = argp::AppArgsProc {};
  let ap = ArgParser::new(&svcname, &mut argsproc);
  ap.proc(|| {
    let handler = Box::new(MyService {});
    SvcType::Rocket(handler)
    SrvAppRt::Rocket(handler)
  })?;

  Ok(())
}

#[get("/")]
fn index() -> &'static str {

Changes to examples/hellosvc-tokio.rs.

//! Simple service that does nothing other than log/trace every N seconds.

mod argp;
mod err;
mod procres;

use std::time::{Duration, Instant};

use qsu::{
  argp::ArgParser, StartState, StopState, SvcEvt, SvcEvtReader, SvcType,
  TokioServiceHandler
  argp::ArgParser,
  rt::{
    SrvAppRt, StartState, StopState, SvcEvt, SvcEvtReader, TokioServiceHandler
  }
};

use err::Error;
use procres::ProcRes;


struct MyService {}

#[qsu::async_trait]
impl TokioServiceHandler for MyService {
  async fn init(&mut self, _ss: StartState) -> Result<(), qsu::Error> {
  async fn init(&mut self, ss: StartState) -> Result<(), qsu::AppErr> {
    ss.report(Some("Entered init".into()));
    tracing::trace!("Running init()");
    Ok(())
  }

  async fn run(&mut self, mut set: SvcEvtReader) -> Result<(), qsu::Error> {
  async fn run(&mut self, mut set: SvcEvtReader) -> Result<(), qsu::AppErr> {
    const SECS: u64 = 30;
    let mut last_dump = Instant::now() - Duration::from_secs(SECS);
    loop {
      if Instant::now() - last_dump > Duration::from_secs(SECS) {
        log::error!("error");
        log::warn!("warn");
        log::info!("info");
        }
      }
    }

    Ok(())
  }

  async fn shutdown(&mut self, _ss: StopState) -> Result<(), qsu::Error> {
  async fn shutdown(&mut self, ss: StopState) -> Result<(), qsu::AppErr> {
    ss.report(Some(("Entered shutdown".to_string()).into()));
    tracing::trace!("Running shutdown()");
    Ok(())
  }
}


fn main() -> ProcRes {
    .expect("Unable to determine default service name");

  // Parse, and process, command line arguments.
  let mut argsproc = argp::AppArgsProc {};
  let ap = ArgParser::new(&svcname, &mut argsproc);
  ap.proc(|| {
    let handler = Box::new(MyService {});
    SvcType::Tokio(None, handler)
    SrvAppRt::Tokio(None, handler)
  })?;

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Changes to examples/hellosvc.rs.

//! Simple service that does nothing other than log/trace every N seconds.

mod argp;
mod err;
mod procres;

use std::{
  thread,
  time::{Duration, Instant}
};

use qsu::{
  argp::ArgParser, ServiceHandler, StartState, StopState, SvcEvt,
  SvcEvtReader, SvcType
  argp::ArgParser,
  rt::{
    ServiceHandler, SrvAppRt, StartState, StopState, SvcEvt, SvcEvtReader
  }
};

use err::Error;
use procres::ProcRes;


struct MyService {}

impl ServiceHandler for MyService {
  fn init(&mut self, _ss: StartState) -> Result<(), qsu::Error> {
  fn init(&mut self, ss: StartState) -> Result<(), qsu::AppErr> {
    ss.report(Some("Entered init".into()));
    tracing::trace!("Running init()");
    Ok(())
  }

  fn run(&mut self, mut set: SvcEvtReader) -> Result<(), qsu::Error> {
  fn run(&mut self, mut set: SvcEvtReader) -> Result<(), qsu::AppErr> {
    const SECS: u64 = 30;
    let mut last_dump = Instant::now() - Duration::from_secs(SECS);
    loop {
      if Instant::now() - last_dump > Duration::from_secs(SECS) {
        log::error!("error");
        log::warn!("warn");
        log::info!("info");
      thread::sleep(std::time::Duration::from_secs(1));
    }

    Ok(())
  }

  fn shutdown(&mut self, _ss: StopState) -> Result<(), qsu::Error> {
  fn shutdown(&mut self, ss: StopState) -> Result<(), qsu::AppErr> {
    ss.report(Some(("Entered shutdown".to_string()).into()));
    tracing::trace!("Running shutdown()");
    Ok(())
  }
}


fn main() -> ProcRes {
    .expect("Unable to determine default service name");

  // Parse, and process, command line arguments.
  let mut argsproc = argp::AppArgsProc {};
  let ap = ArgParser::new(&svcname, &mut argsproc);
  ap.proc(|| {
    let handler = Box::new(MyService {});
    SvcType::Sync(handler)
    SrvAppRt::Sync(handler)
  })?;

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
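
The `run()` loops in these examples poll for service events between units of work. For reference, here is a minimal sketch (not taken verbatim from the example) of a synchronous loop built around `SvcEvtReader::try_recv()`, which returns `None` when no event is pending; `do_work()` is a hypothetical placeholder for the application's periodic work.

use std::{thread, time::Duration};

use qsu::rt::{SvcEvt, SvcEvtReader};

// Sketch of a synchronous run loop that interleaves work with
// non-blocking event checks.
fn run_loop(mut set: SvcEvtReader) {
  loop {
    // Drain a pending service event, if any, without blocking.
    if let Some(evt) = set.try_recv() {
      match evt {
        SvcEvt::Shutdown | SvcEvt::Terminate => break,
        SvcEvt::ReloadConf => { /* re-read configuration here */ }
        _ => {}
      }
    }
    do_work();
    thread::sleep(Duration::from_secs(1));
  }
}

// Hypothetical unit of periodic work.
fn do_work() {
  log::info!("tick");
}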

Changes to src/argp.rs.

//! Helpers for integrating clap into an application using _qsu_.

use clap::{builder::Str, Arg, ArgAction, ArgMatches, Args, Command};

use crate::{
  installer::{self, RegSvc},
  lumberjack::LogLevel,
  Error, RunCtx, SvcType
  rt::{RunCtx, SrvAppRt},
  Error
};


/// Modify a `clap` [`Command`] instance to accept common service management
/// subcommands.
///
/// If `inst_subcmd` is `Some()`, it should be the name of the subcommand used
/// to register a service.  If `rm_subcmd` is `Some()` it should be the name of
/// the subcommand used to deregister a service.  Similarly `run_subcmd` is
/// used to add a subcommand for running the service under a service subsystem
/// [where applicable].
///
/// It is recommended that applications use the higher-level `ArgParser`
/// instead.
pub fn add_subcommands(
  cli: Command,
  svcname: &str,
  inst_subcmd: Option<&str>,
  rm_subcmd: Option<&str>,
  run_subcmd: Option<&str>
) -> Command {
  let cli = if let Some(subcmd) = rm_subcmd {
    let sub = mk_rm_cmd(subcmd, svcname);
    cli.subcommand(sub)
  } else {
    cli
  };

  let cli = if let Some(subcmd) = run_subcmd {
  if let Some(subcmd) = run_subcmd {
    let sub = mk_run_cmd(subcmd, svcname);
    cli.subcommand(sub)
  } else {
    cli
  };

  }
  cli
}

/// Register service.
/// Service registration context.
#[derive(Debug, Args)]
struct RegSvcArgs {
  /// Autostart service at boot.
  #[arg(short = 's', long)]
  auto_start: bool,

  /// Set an optional display name for the service.
  #[arg(long, value_enum, hide(true), value_name = "LEVEL")]
  trace_level: Option<LogLevel>,

  #[arg(long, value_enum, hide(true), value_name = "FNAME")]
  trace_file: Option<String>
}

/// Create a `clap` [`Command`] object that accepts service registration
/// arguments.
///
/// It is recommended that applications use the higher-level `ArgParser`
/// instead, but this call exists in case applications need finer grained
/// control.
pub fn mk_inst_cmd(cmd: &str, svcname: &str) -> Command {
  let namearg = Arg::new("svcname")
    .short('n')
    .long("name")
    .action(ArgAction::Set)
    .value_name("SVCNAME")
    .default_value(Str::from(svcname.to_string()))
}


/// Deregister service.
#[derive(Debug, Args)]
struct DeregSvcArgs {}

/// Create a `clap` [`Command`] object that accepts service deregistration
/// arguments.
///
/// It is recommended that applications use the higher-level `ArgParser`
/// instead, but this call exists in case applications need finer grained
/// control.
pub fn mk_rm_cmd(cmd: &str, svcname: &str) -> Command {
  let namearg = Arg::new("svcname")
    .short('n')
    .long("name")
    .action(ArgAction::Set)
    .value_name("SVCNAME")
    .default_value(svcname.to_string())
    .help("Name of service to remove");

  let cli = Command::new(cmd.to_string()).arg(namearg);

  DeregSvcArgs::augment_args(cli)
}


/// Parsed service deregistration arguments.
pub struct DeregSvc {
  pub svcname: String
}

impl DeregSvc {
  pub fn from_cmd_match(matches: &ArgMatches) -> Self {
    let svcname = matches.get_one::<String>("svcname").unwrap().to_owned();
    Self { svcname }
  }
}


/// Run service.
#[derive(Debug, Args)]
struct RunSvcArgs {}


/// Create a `clap` [`Command`] object that accepts service running arguments.
///
/// It is recommended that applications use the higher-level `ArgParser`
/// instead, but this call exists in case applications need finer grained
/// control.
pub fn mk_run_cmd(cmd: &str, svcname: &str) -> Command {
  let namearg = Arg::new("svcname")
    .short('n')
    .long("name")
    .action(ArgAction::Set)
    .value_name("SVCNAME")
    .default_value(svcname.to_string())
  RunApp(RunCtx),

  /// Nothing to do (service was probably registered/deregistered).
  Quit
}


/// Parsed service running arguments.
pub struct RunSvc {
  pub svcname: String
}

impl RunSvc {
  pub fn from_cmd_match(matches: &ArgMatches) -> Self {
    let svcname = matches.get_one::<String>("svcname").unwrap().to_owned();
    Self { svcname }
  }
}


/// Allow application to customise behavior of an [`ArgParser`] instance.
pub trait ArgsProc {
  /// Callback allowing application to configure service installation argument
  /// parser.
  fn inst_subcmd(&mut self) {}

  fn rm_subcmd(&mut self) {}

  fn run_subcmd(&mut self) {}

  /// Callback allowing application to configure the service registry context
  /// before the service is registered.
  fn inst_subcmd(&mut self) {
    todo!()
  }

  fn rm_subcmd(&mut self) {
    todo!()
  }

  fn run_subcmd(&mut self) {
    todo!()
  }

  /// Callback allowing application to configure the service registration
  /// context just before the service is registered.
  ///
  /// This trait method can, among other things, be used by an application to:
  /// - Configure a service work directory.
  /// - Add environment variables.
  /// - Add command line arguments to the run command.
  ///
  /// The `sub_m` argument represents `clap`'s parsed subcommand context for
  /// the service registration subcommand.  Applications that want to add
  /// custom arguments to the parser should implement the
  /// [`ArgsProc::inst_subcmd()`] trait method and perform the subcommand
  /// augmentation there.
  ///
  /// The default implementation does nothing but return `regsvc` unmodified.
  #[allow(unused_variables)]
  fn proc_inst(
    &self,
    sub_m: &ArgMatches,
    regsvc: RegSvc
        Ok(ArgpRes::RunApp(RunCtx::new(&self.svcname)))
      }
    }
  }

  /// Process command line arguments.
  ///
  /// Calling this method will initialize the command line parser, parse the
  /// command line, using the associated [`ArgsProc`] as appropriate to modify
  /// the argument parser, and then take the appropriate action:
  ///
  /// - If a service registration was requested, the service will be registered
  ///   and then the function will return.
  /// - If a service deregistration was requested, the service will be
  ///   deregistered and then the function will return.
  /// - If a service run was requested, then set up the service subsystem and
  ///   launch the server application under it.
  /// - If an application-defined subcommand was called, then process it using
  ///   [`ArgsProc::proc_other()`] and then exit.
  /// - If none of the above subcommands were issued, then run the server
  ///   application as a foreground process.
  ///
  /// The `bldr` is a closure that will be called to yield the `SvcType` in
  /// The `bldr` is a closure that will be called to yield the `SrvAppRt` in
  /// case the service was requested to run.
  pub fn proc<F>(mut self, bldr: F) -> Result<(), Error>
  where
    F: FnOnce() -> SvcType
    F: FnOnce() -> SrvAppRt
  {
    // Create registration subcommand
    let sub = mk_inst_cmd(&self.reg_subcmd, &self.svcname);
    self.cli = self.cli.subcommand(sub);

    // Create deregistration subcommand
    let sub = mk_rm_cmd(&self.dereg_subcmd, &self.svcname);

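For applications that want finer-grained control than `ArgParser`, the `add_subcommands()` helper shown earlier in this file can be applied to an existing `clap` command directly. A minimal sketch under that assumption follows; the subcommand names ("install", "uninstall", "run") are arbitrary choices for the illustration.

use clap::Command;
use qsu::argp::add_subcommands;

fn main() {
  // Start from the application's own clap command and graft the
  // standard service-management subcommands onto it.
  let cli = Command::new("hellosvc");
  let cli = add_subcommands(
    cli,
    "hellosvc",
    Some("install"),
    Some("uninstall"),
    Some("run")
  );

  // Parsing proceeds as usual; the application inspects the matches to
  // decide which subcommand (if any) was requested.
  let _matches = cli.get_matches();
}
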
Changes to src/err.rs.

use std::{fmt, io};
use std::{any::Any, fmt, io};

#[derive(Debug)]
pub enum ArgsError {
  #[cfg(feature = "clap")]
  Clap(clap::Error),
  Msg(String)
}


/// Indicate failure for server application callbacks.
#[derive(Debug, Default)]
pub struct AppErrors {
  pub init: Option<AppErr>,
  pub run: Option<AppErr>,
  pub shutdown: Option<AppErr>
}

impl AppErrors {
  pub fn init_failed(&self) -> bool {
    self.init.is_some()
  }

  pub fn run_failed(&self) -> bool {
    self.run.is_some()
  }

  pub fn shutdown_failed(&self) -> bool {
    self.shutdown.is_some()
  }

  pub fn origin(&self) -> CbOrigin {
    if self.init_failed() {
      CbOrigin::Init
    } else if self.run_failed() {
      CbOrigin::Run
    } else if self.shutdown_failed() {
      CbOrigin::Shutdown
    } else {
      // Can't happen
      unimplemented!()
    }
  }
}


/// Errors that qsu will return to application.
#[derive(Debug)]
pub enum Error {
  /// Application-defined error.
  ///
  /// Applications can use this variant to pass application-specific errors
  /// through the runtime back to itself.
  App(CbOrigin, AppErr),

  ArgP(ArgsError),
  BadFormat(String),
  Internal(String),
  IO(String),

  /// An error related to logging occurred.
  ///
  /// This includes both initialization and actual logging.
  ///
  /// On Windows errors such as failure to register an event source will be
  /// treated as this error variant as well.
  LumberJack(String),

  /// Missing expected data.
  Missing(String),

  /// Rocket-specific errors.
  #[cfg(feature = "rocket")]
  #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
  Rocket(String),
  SubSystem(String),

  /// Returned by [`RunCtx::run()`](crate::rt::RunCtx) to indicate which
  /// server application callbacks failed.
  #[cfg(feature = "rt")]
  #[cfg_attr(docsrs, doc(cfg(feature = "rt")))]
  SrvApp(AppErrors),

  Unsupported
}


impl Error {
  pub fn is_apperr(&self) -> bool {
    matches!(self, Error::App(_, _))
  }

  /// Attempt to convert [`Error`] into application-specific error.
  ///
  /// If it is not an `Error::App()`, or it cannot be downcast to type `E`, the
  /// error will be returned back as an `Err()`.
  pub fn try_into_apperr<E: 'static>(self) -> Result<(CbOrigin, E), Error> {
    match self {
      Error::App(origin, e) => match e.try_into_inner::<E>() {
        Ok(e) => Ok((origin, e)),
        Err(e) => Err(Error::App(origin, AppErr::new(e)))
      },
      e => Err(e)
    }
  }

  /// Unwrap application-specific error from an [`Error`].
  ///
  /// # Panic
  /// Panics if `Error` variant is not `Error::App()`.
  pub fn unwrap_apperr<E: 'static>(self) -> (CbOrigin, E) {
    let Ok((origin, e)) = self.try_into_apperr::<E>() else {
      panic!("Unable to unwrap error E");
    };
    (origin, e)
  }

  /*
  pub(crate) fn app<E: Send + 'static>(origin: CbOrigin, e: E) -> Self {
    Error::App(origin, AppErr::new(e))
  }
  */

  pub fn bad_format<S: ToString>(s: S) -> Self {
    Error::BadFormat(s.to_string())
  }

  pub fn internal<S: ToString>(s: S) -> Self {
    Error::Internal(s.to_string())
  }

  pub fn io<S: ToString>(s: S) -> Self {
    Error::IO(s.to_string())
  }

  pub fn lumberjack<S: ToString>(s: S) -> Self {
    Error::LumberJack(s.to_string())
  }

  pub fn missing<S: ToString>(s: S) -> Self {
    Error::Missing(s.to_string())
  }
}

impl std::error::Error for Error {}

impl fmt::Display for Error {
  fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
    match self {
      Error::App(_origin, _e) => {
        write!(f, "Application-defined error")
      }
      Error::ArgP(s) => {
        // ToDo: Handle the ArgsError::Clap and ArgsError::Msg differently
        write!(f, "Argument parser; {:?}", s)
      }
      Error::BadFormat(s) => {
        write!(f, "Bad format error; {}", s)
      }
      Error::Internal(s) => {
        write!(f, "Internal error; {}", s)
      }
      Error::IO(s) => {
        write!(f, "I/O error; {}", s)
      }
      Error::LumberJack(s) => {
        write!(f, "LumberJack error; {}", s)
      }
      Error::Missing(s) => {
        write!(f, "Missing data; {}", s)
      }
      #[cfg(feature = "rocket")]
      Error::Rocket(s) => {
        write!(f, "Rocket error; {}", s)
      }
      Error::SubSystem(s) => {
        write!(f, "Service subsystem error; {}", s)
      }
      Error::SrvApp(e) => {
        let mut v = vec![];
        if e.init.is_some() {
          v.push("init");
        }
        if e.run.is_some() {
          v.push("run");
        }
        if e.shutdown.is_some() {
          v.push("shutdown");
        }
        write!(f, "Server application failed [{}]", v.join(","))
      }
      Error::Unsupported => {
        write!(f, "Operation is unsupported [on this platform]")
      }
    }
  }
}

#[cfg(windows)]
impl From<windows_service::Error> for Error {
  fn from(err: windows_service::Error) -> Self {
    Error::SubSystem(err.to_string())
  }
}


/// An error type used to store an application-specific error.
///
/// Application call-backs return this type for the `Err()` case in order to
/// allow the errors to be passed back to the application call that started the
/// service runtime.
#[repr(transparent)]
#[derive(Debug)]
pub struct AppErr(Box<dyn Any + Send + 'static>);

impl AppErr {
  pub fn new<E>(e: E) -> Self
  where
    E: Send + 'static
  {
    Self(Box::new(e))
  }

  /// Attempt to unpack and cast the inner error type.
  ///
  /// If it can't be downcast to `E`, `AppErr` will be returned in the `Err()`
  /// case.
  ///
  /// ```
  /// use qsu::AppErr;
  ///
  /// enum MyErr {
  ///   Something(String)
  /// }
  /// let apperr = AppErr::new(MyErr::Something("hello".into()));
  ///
  /// let Ok(e) = apperr.try_into_inner::<MyErr>() else {
  ///   panic!("Unexpectedly not MyErr");
  /// };
  /// ```
  pub fn try_into_inner<E: 'static>(self) -> Result<E, AppErr> {
    match self.0.downcast::<E>() {
      Ok(e) => Ok(*e),
      Err(e) => Err(AppErr(e))
    }
  }

  /// Unwrap application-specific error from an [`Error`](crate::err::Error).
  ///
  /// ```
  /// use qsu::AppErr;
  ///
  /// enum MyErr {
  ///   Something(String)
  /// }
  /// let apperr = AppErr::new(MyErr::Something("hello".into()));
  ///
  /// let MyErr::Something(e) = apperr.unwrap_inner::<MyErr>() else {
  ///   panic!("Unexpectedly not MyErr::Something");
  /// };
  /// assert_eq!(e, "hello");
  /// ```
  ///
  /// # Panic
  /// Panics if the inner type is not castable to `E`.
  pub fn unwrap_inner<E: 'static>(self) -> E {
    let Ok(e) = self.0.downcast::<E>() else {
      panic!("Unable to downcast to error type E");
    };
    *e
  }
}

/// Origin of an application callback error.
#[derive(Debug)]
pub enum CbOrigin {
  /// The application error occurred in the service handler's `init()`
  /// callback.
  Init,

  /// The application error occurred in the service handler's `run()`
  /// callback.
  Run,

  /// The application error occurred in the service handler's `shutdown()`
  /// callback.
  Shutdown
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
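
To illustrate how the new error plumbing is meant to be consumed, here is a minimal sketch based on the `Error::App`, `AppErr` and `CbOrigin` types introduced in this diff (re-exported at the crate root per the lib.rs change); the `MyErr` enum is made up for the illustration.

use qsu::{AppErr, CbOrigin, Error};

#[derive(Debug)]
enum MyErr {
  Oops(String)
}

fn main() {
  // Simulate the runtime handing back an application error wrapped in
  // Error::App, tagged with the callback it originated from.
  let err = Error::App(CbOrigin::Run, AppErr::new(MyErr::Oops("boom".into())));

  // Recover the concrete application error and its origin.
  match err.try_into_apperr::<MyErr>() {
    Ok((origin, MyErr::Oops(msg))) => {
      println!("callback {:?} failed: {}", origin, msg);
    }
    Err(other) => println!("not an application error: {}", other)
  }
}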

Changes to src/installer/winsvc.rs.

use std::{cell::RefCell, ffi::OsString, thread, time::Duration};

use windows_service::{
  service::{
    ServiceAccess, ServiceDependency, ServiceErrorControl, ServiceInfo,
    ServiceStartType, ServiceState, ServiceType
  },
  service_manager::{ServiceManager, ServiceManagerAccess}
};

use crate::{
  err::Error,
  rt::winsvc::{
  winsvc::{create_service_params, write_service_subkey}
    create_service_params, get_service_params_subkey, write_service_subkey
  }
};


pub fn install(ctx: super::RegSvc) -> Result<(), Error> {
  let svcname = &ctx.svcname;

  // Create a reference cell used to keep track of whether to keep system
    key.set_value("Environment", &envs)?;
  }

  //println!("==> Service installation successful");

  let mut params = create_service_params(svcname)?;

  // Just so the uninstaller will accept this service
  params.set_value("Installer", &"qsu")?;

  if let Some(wd) = ctx.workdir {
    params.set_value("WorkDir", &wd)?;
  }

  if let Some(ll) = ctx.log_level {
    params.set_value("LogLevel", &ll.to_string())?;
  }
  // Mark status as success so the scopeguards won't attempt to reverse the
  // changes.
  *status.borrow_mut() = true;

  Ok(())
}


/// Deregister a system service.
///
/// # Constraints
/// Uninstalling the wrong service can have spectacularly bad side-effects, so
/// qsu goes to some lengths to ensure that only services it installed itself
/// can be uninstalled.
///
/// Before attempting an actual uninstallation, this function will verify that
/// under the service's `Parameters` subkey there is an `Installer` key with
/// the value `qsu`.
pub fn uninstall(svcname: &str) -> Result<(), Error> {
  // Only allow uninstallation of services that have an Installer=qsu key in
  // its Parameters subkey.
  if let Ok(params) = get_service_params_subkey(svcname) {
    if let Ok(val) = params.get_value::<String, &str>("Installer") {
      if val != "qsu" {
        Err(Error::missing(
          "Refusing to uninstall service that doesn't appear to be installed \
           by qsu"
        ))?;
      }
    } else {
      Err(Error::missing(
        "Service Parameters does not have a Installer key."
      ))?;
    }
  } else {
    Err(Error::missing("Service does not have a Parameters subkey."))?;
  }


  let manager_access = ServiceManagerAccess::CONNECT;
  let service_manager =
    ServiceManager::local_computer(None::<&str>, manager_access)?;
  let service_access =
    ServiceAccess::QUERY_STATUS | ServiceAccess::STOP | ServiceAccess::DELETE;
  let service = service_manager.open_service(&svcname, service_access)?;

Changes to src/lib.rs.

//! _qsu_ is primarily a thin layer between the server application code and the
//! operating system's service subsystem.  When the application is not running
//! under a service subsystem qsu may simulate parts of one so that the server
//! application code does not need to diverge too far between the service and
//! non-service cases.
//! _qsu_ is a set of tools for integrating a server application against a
//! service subsystem (such as
//! [Windows Services](https://learn.microsoft.com/en-us/windows/win32/services/services), [systemd](https://systemd.io/), or
//! [launchd](https://www.launchd.info/)).
//!
//! It offers a thin runtime wrapper layer with the purpose of abstracting away
//! differences between service subsystems (and also provides the same
//! interface when running the server application as a foreground process).
//! More information about the wrapper runtime can be found in the [rt] module
//! documentation.
//!
//! In addition _qsu_ offers helper functions to register/deregister an
//! executable with the system's service subsystem.  These are documented
//! in the [installer] module.
//!
//! Finally, it offers an argument parser that provides basic service
//! registration/deregistration and running using a consistent command line
//! interface.  These are documented in the [argp] module.
//!
//!
//! # Features
//! | Feature     | Function
//! |-------------|----------
//! | `clap`      | Enable `clap` (argument parser) integration.
//! | `installer` | Tools for registering/deregistering services.
//! | `rt`        | Service wrapper (enabled by default).
//! | `systemd`   | systemd integration support.
//! | `tokio`     | Tokio server application type support.
//! | `rocket`    | Rocket server application type support.
//!
//! In addition there's a special `wait-for-debugger` feature that is only used
//! on Windows.  It will make the service runtime halt and wait for a debugger
//! to attach just before starting the Windows Service runtime.  Once a
//! debugger has attached, it will voluntarily trigger a breakpoint.

#![cfg_attr(docsrs, feature(doc_cfg))]

mod err;
mod lumberjack;
mod nosvc;
mod rttype;
pub mod signals;

#[cfg(feature = "rt")]
#[cfg_attr(docsrs, doc(cfg(feature = "rt")))]
pub mod rt;

#[cfg(feature = "clap")]
#[cfg_attr(docsrs, doc(cfg(feature = "clap")))]
pub mod argp;

#[cfg(feature = "installer")]
#[cfg_attr(docsrs, doc(cfg(feature = "installer")))]
pub mod installer;

#[cfg(all(target_os = "linux", feature = "systemd"))]
#[cfg_attr(docsrs, doc(cfg(feature = "systemd")))]
mod systemd;

#[cfg(windows)]
pub mod winsvc;

use std::{ffi::OsStr, path::Path, sync::Arc};
use std::{ffi::OsStr, path::Path};

use tokio::{runtime, sync::broadcast};

pub use async_trait::async_trait;

pub use lumberjack::LumberJack;

pub use crate::err::Error;

pub use err::{AppErr, CbOrigin, Error};

/// Report the current startup/shutdown state to the platform service
/// subsystem.
pub(crate) trait StateReporter {
  fn starting(&self, checkpoint: u32);

  fn started(&self);

  fn stopping(&self, checkpoint: u32);

  fn stopped(&self);
}


/// Report startup checkpoints to the service subsystem.
pub struct StartState {
  sr: Arc<dyn StateReporter + Send + Sync>
}
impl StartState {
  pub fn report(&self, checkpoint: u32) {
    self.sr.starting(checkpoint);
  }
}

/// Report shutdown checkpoints to the service subsystem.
pub struct StopState {
  sr: Arc<dyn StateReporter + Send + Sync>
}
impl StopState {
  pub fn report(&self, checkpoint: u32) {
    self.sr.stopping(checkpoint);
  }
}


/// "Synchronous" (non-`async`) server application.
pub trait ServiceHandler {
  fn init(&mut self, ss: StartState) -> Result<(), Error>;

  fn run(&mut self, ser: SvcEvtReader) -> Result<(), Error>;

  fn shutdown(&mut self, ss: StopState) -> Result<(), Error>;
}


/// Tokio (`async`) server application.
#[async_trait]
pub trait TokioServiceHandler {
  async fn init(&mut self, ss: StartState) -> Result<(), Error>;

  async fn run(&mut self, ser: SvcEvtReader) -> Result<(), Error>;

  async fn shutdown(&mut self, ss: StopState) -> Result<(), Error>;
}


/// Rocket server application handler.
///
/// While Rocket is built on top of tokio, it \[Rocket\] wants to initialize
/// tokio itself.
///
/// There are two major ways to write Rocket services using qsu; either the
/// application can let qsu be aware of the server applications' `Rocket`
/// instances.  It does this by creating the `Rocket` instances in
/// `RocketServiceHandler::init()` and returns them.  _qsu_ will ignite these
/// rockets and pass them to `RocketServiceHandler::run()`.  The application is
/// responsible for launching the rockets at this point.
///
/// The other way to do it is to completely manage the `Rocket` instances in
/// application code (by not returning rocket instances from `init()`).
///
/// Allowing _qsu_ to manage the `Rocket` instances will cause _qsu_ to request
/// graceful shutdown of all `Rocket` instances once a `SvcEvt::Shutdown` is
/// sent by the runtime.
///
/// It is recommended that `ctrlc` shutdown and termination signals are
/// disabled in each `Rocket` instance's configuration, and allow the _qsu_
/// runtime to be responsible for initiating the `Rocket` shutdown.
#[cfg(feature = "rocket")]
#[cfg(feature = "tokio")]
#[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
#[async_trait]
pub trait RocketServiceHandler {
  /// Rocket service initialization.
  ///
  /// The returned `Rocket`s will be ignited and their shutdown handlers will
  /// be triggered on shutdown.
  async fn init(
    &mut self,
    ss: StartState
  ) -> Result<Vec<rocket::Rocket<rocket::Build>>, Error>;

pub use tokio;
  async fn run(
    &mut self,
    rockets: Vec<rocket::Rocket<rocket::Ignite>>,
    ser: SvcEvtReader
  ) -> Result<(), Error>;

  async fn shutdown(&mut self, ss: StopState) -> Result<(), Error>;
}


/// Event notifications that originate from the service subsystem that is
/// controlling the server application.
#[derive(Clone, Debug)]
pub enum SvcEvt {
  /// Service subsystem has requested that the server application should pause
  /// its operations.
  ///
  /// Only the Windows service subsystem will emit these events.
  Pause,

  /// Service subsystem has requested that the server application should
  /// resume its operations.
  ///
  /// Only the Windows service subsystem will emit these events.
  Resume,

  /// Service subsystem has requested that the service's configuration should
  /// be reread.
  ///
  /// On Unixy platforms this is triggered by SIGHUP, and is unsupported on
  /// Windows.
  ReloadConf,

  /// The service subsystem (or equivalent) has requested that the service
  /// shutdown.
  Shutdown,

  Terminate
}


/// Channel end-point used to receive events from the service subsystem.
pub struct SvcEvtReader {
  rx: broadcast::Receiver<SvcEvt>
}

impl SvcEvtReader {
  /// Block and wait for an event.
  ///
  /// Once `SvcEvt::Shutdown` or `SvcEvt::Terminate` has been received, this
  /// method should not be called again.
  pub fn recv(&mut self) -> Option<SvcEvt> {
    self.rx.blocking_recv().ok()
  }

  /// Attempt to get the next event.
  ///
  /// Once `SvcEvt::Shutdown` or `SvcEvt::Terminate` has been received, this
  /// method should not be called again.
  pub fn try_recv(&mut self) -> Option<SvcEvt> {
    self.rx.try_recv().ok()
  }

  /// Async wait for an event.
  ///
  /// Once `SvcEvt::Shutdown` or `SvcEvt::Terminate` has been received, this
  /// method should not be called again.
  pub async fn arecv(&mut self) -> Option<SvcEvt> {
    self.rx.recv().await.ok()
  }
}


/// The types of service types supported.
pub enum SvcType {
  Sync(Box<dyn ServiceHandler + Send>),

  /// Initialize a tokio runtime.
  Tokio(
    Option<runtime::Builder>,
    Box<dyn TokioServiceHandler + Send>
  ),

  #[cfg(feature = "rocket")]
#[cfg(feature = "rocket")]
  #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
  /// Rocket 0.5rc.3 insists on initializing tokio itself.
  Rocket(Box<dyn RocketServiceHandler + Send>)
}


/// Service configuration context.
pub struct RunCtx {
  service: bool,
  svcname: String
}

impl RunCtx {
  /// Run as a systemd service.
  #[cfg(all(target_os = "linux", feature = "systemd"))]
  fn systemd(_svcname: &str, st: SvcType) -> Result<(), Error> {
    LumberJack::default().init()?;

    let reporter = Arc::new(systemd::ServiceReporter {});

    let res = match st {
      SvcType::Sync(handler) => rttype::sync_main(handler, reporter, None),
      SvcType::Tokio(rtbldr, handler) => {
        rttype::tokio_main(rtbldr, handler, reporter, None)
      }
      SvcType::Rocket(handler) => rttype::rocket_main(handler, reporter, None)
    };

    res
  }

  /// Run as a Windows service.
  #[cfg(windows)]
  fn winsvc(svcname: &str, st: SvcType) -> Result<(), Error> {
    winsvc::run(svcname, st)?;

    Ok(())
  }

  /// Run as a foreground server
  fn foreground(_svcname: &str, st: SvcType) -> Result<(), Error> {
    LumberJack::default().init()?;

    let reporter = Arc::new(nosvc::ServiceReporter {});

    match st {
      SvcType::Sync(handler) => rttype::sync_main(handler, reporter, None),
      SvcType::Tokio(rtbldr, handler) => {
        rttype::tokio_main(rtbldr, handler, reporter, None)
      }

      #[cfg(feature = "rocket")]
      SvcType::Rocket(handler) => rttype::rocket_main(handler, reporter, None)
    }
  }
}

impl RunCtx {
  pub fn new(name: &str) -> Self {
    Self {
      service: false,
      svcname: name.into()
    }
  }

  pub fn service(mut self) -> Self {
    self.service = true;
    self
  }

  pub fn service_ref(&mut self) -> &mut Self {
    self.service = true;
    self
  }

  /// Launch the application.
  ///
  /// If this `RunCtx` has been marked as a _service_ then it will perform the
  /// appropriate service subsystem integration before running the actual
  /// server application code.
  ///
  /// This function must only be called from the main thread of the process,
  /// and must be called before any other threads are started.
  pub fn run(self, st: SvcType) -> Result<(), Error> {
    if self.service {
      #[cfg(all(target_os = "linux", feature = "systemd"))]
      Self::systemd(&self.svcname, st)?;

      #[cfg(windows)]
      Self::winsvc(&self.svcname, st)?;

      // ToDo: We should check for other platforms here (like macOS/launchd)
    } else {
      // Do not run against any specific service subsystem.  Despite its name
      // this isn't necessarily running as a foreground process; some service
      // subsystems do not make a distinction.  Perhaps a better mental model
      // is that certain service subsystems expect to run regular "foreground"
      // processes.
      Self::foreground(&self.svcname, st)?;
    }

    Ok(())
  }

  /// Convenience method around [`Self::run()`] using [`SvcType::Sync`].
  pub fn run_sync(
    self,
    handler: Box<dyn ServiceHandler + Send>
  ) -> Result<(), Error> {
    self.run(SvcType::Sync(handler))
  }

  /// Convenience method around [`Self::run()`] using [`SvcType::Tokio`].
  //#[cfg(feature = "tokio")]
  pub fn run_tokio(
    self,
    rtbldr: Option<runtime::Builder>,
    handler: Box<dyn TokioServiceHandler + Send>
  ) -> Result<(), Error> {
    self.run(SvcType::Tokio(rtbldr, handler))
  }

  /// Convenience method around [`Self::run()`] using [`SvcType::Rocket`].
  #[cfg(feature = "rocket")]
  pub fn run_rocket(
pub use rocket;
    self,
    handler: Box<dyn RocketServiceHandler + Send>
  ) -> Result<(), Error> {
    self.run(SvcType::Rocket(handler))
  }
}




/// Attempt to determine a default service name.
/// Attempt to derive a default service name based on the executable's name.
///
/// The idea is to get the current executable's file name and strip its
/// extension (if there is one).  The file stem name is the default service
/// name.
/// name.  On macOS the name will be prefixed by `local.`.
pub fn default_service_name() -> Option<String> {
  let binary_path = ::std::env::current_exe().ok()?;

  let name = binary_path.file_name()?;
  let name = Path::new(name);
  let name = name.file_stem()?;

  mkname(name)
}

#[cfg(not(target_os = "macos"))]
fn mkname(nm: &OsStr) -> Option<String> {
  nm.to_str().map(String::from)
}

#[cfg(target_os = "macos")]
fn mkname(nm: &OsStr) -> Option<String> {
  nm.to_str().map(|x| format!("local.{}", x))
}


pub fn leak_default_service_name() -> Option<&'static str> {
  let svcname = default_service_name()?;
  Some(svcname.leak())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
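
A small usage sketch of the service-name helpers kept at the end of this file; the printed values depend on the executable's file name, and on macOS the derived name gains a `local.` prefix.

fn main() {
  // Derive a default service name from the current executable's file stem.
  if let Some(name) = qsu::default_service_name() {
    println!("service name: {}", name);
  }

  // The leaked variant yields a &'static str, convenient when the name is
  // needed for the whole lifetime of the process.
  if let Some(name) = qsu::leak_default_service_name() {
    println!("static service name: {}", name);
  }
}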

Changes to src/lumberjack.rs.

  #[cfg(windows)]
  WinEvtLog { svcname: String }
}

/// Logging and tracing initialization.
pub struct LumberJack {
  init: bool,
  log_out: LogOut,
  log_level: LogLevel,
  trace_level: LogLevel,
  //log_file: Option<PathBuf>,
  trace_file: Option<PathBuf>
}

impl Default for LumberJack {
  /// Create a default log/trace initialization.
  ///
  /// This will set the `log` log level to the value of the `LOG_LEVEL`
  /// environment variable, or default to `warn` (if either not set or
  /// invalid).
  ///
  /// The `tracing` trace level will use the environment variable `TRACE_LEVEL`
  /// in a similar manner, but defaults to `off`.
  ///
  /// If the environment variable `TRACE_FILE` is set, its value will be used
  /// as the file name to write the trace logs to.
  fn default() -> Self {
    let log_level = if let Ok(level) = std::env::var("LOG_LEVEL") {
      if let Ok(level) = level.parse::<LogLevel>() {
        level
      } else {
        LogLevel::Warn
      }
        LogLevel::Off
      }
    } else {
      LogLevel::Off
    };

    Self {
      init: true,
      log_out: Default::default(),
      log_level,
      trace_level,
      //log_file: None,
      trace_file
    }
  }
}

impl LumberJack {
  /// Create a [`LumberJack::default()`] object.
  pub fn new() -> Self {
    Self::default()
  }

  /// Do not initialize logging/tracing.
  ///
  /// This is useful when running tests.
  pub fn noinit() -> Self {
    Self {
      init: false,
      ..Default::default()
    }
  }

  pub fn set_init(mut self, flag: bool) -> Self {
    self.init = flag;
    self
  }

  /// Load logging/tracing information from a service Parameters subkey.
  #[cfg(windows)]
  pub fn from_winsvc(svcname: &str) -> Result<Self, Error> {
    let params = crate::winsvc::get_service_param(svcname)?;
    let params = crate::rt::winsvc::get_service_param(svcname)?;
    let loglevel = params
      .get_value::<String, &str>("LogLevel")
      .unwrap_or(String::from("warn"))
      .parse::<LogLevel>()
      .unwrap_or(LogLevel::Warn);
    let tracelevel = params.get_value::<String, &str>("TraceLevel");
    let tracefile = params.get_value::<String, &str>("TraceFile");
    } else {
      this
    };

    Ok(this)
  }

  /// Set the `log` logging level.
  pub fn log_level(mut self, level: LogLevel) -> Self {
    self.log_level = level;
    self
  }

  /// Set the `tracing` log level.
  pub fn trace_level(mut self, level: LogLevel) -> Self {
    self.trace_level = level;
    self
  }

  /// Set a file to which `tracing` log entries are written (rather than to
  /// write to console).
  pub fn trace_file<P>(mut self, fname: P) -> Self
  where
    P: AsRef<Path>
  {
    self.trace_file = Some(fname.as_ref().to_path_buf());
    self
  }

  /// Commit requested settings to `log` and `tracing`.
  pub fn init(self) -> Result<(), Error> {
    if self.init {
    match self.log_out {
      LogOut::Console => {
        init_console_logging()?;
      }
      #[cfg(windows)]
      LogOut::WinEvtLog { svcname } => {
        eventlog::init(&svcname, log::Level::Trace)?;
        log::set_max_level(self.log_level.into());
      }
    }
      match self.log_out {
        LogOut::Console => {
          init_console_logging()?;
        }
        #[cfg(windows)]
        LogOut::WinEvtLog { svcname } => {
          eventlog::init(&svcname, log::Level::Trace)?;
          log::set_max_level(self.log_level.into());
        }
      }

    if let Some(fname) = self.trace_file {
      init_file_tracing(fname, self.trace_level);
    } else {
      init_console_tracing(self.trace_level);
    }
      if let Some(fname) = self.trace_file {
        init_file_tracing(fname, self.trace_level);
      } else {
        init_console_tracing(self.trace_level);
      }

    Ok(())
      Ok(())
    } else {
      Ok(())
    }
  }
}


#[derive(Debug, PartialEq, Eq, Clone, Copy)]
#[cfg_attr(feature = "clap", derive(ValueEnum))]
pub enum LogLevel {

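A minimal sketch of configuring logging through the `LumberJack` builder shown above rather than through the `LOG_LEVEL`/`TRACE_LEVEL`/`TRACE_FILE` environment variables; the trace file path is an arbitrary choice for the illustration, and `LumberJack` is assumed to remain re-exported at the crate root as in the lib.rs diff.

fn main() -> Result<(), qsu::Error> {
  // Route `tracing` output to a file and commit the settings; the log and
  // trace levels fall back to the env-var driven defaults described in the
  // doc comment above.
  qsu::LumberJack::new()
    .trace_file("/tmp/qsu-trace.log")
    .init()?;

  log::warn!("logging is set up");
  Ok(())
}
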
Deleted src/nosvc.rs.

//! Not a service module.
//!
//! It might seem odd to have a no-service in a service library, which is true.
//! But it simplifies the service application code to be able to treat all cases
//! (roughly) equally.

pub struct ServiceReporter {}

impl super::StateReporter for ServiceReporter {
  fn starting(&self, _checkpoint: u32) {}

  fn started(&self) {}

  fn stopping(&self, _checkpoint: u32) {}

  fn stopped(&self) {}
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt.rs.

//! Server application wrapper runtime.
//!
//! # Overview
//! The _qsu_ runtime lives in a layer between the actual server application
//! and the "service subsystems" that server applications can integrate
//! against:
//!
//! <pre>
//! +---------------------+
//! |  Server Application |  (HTTP server, Matrix server, etc)
//! +---------------------+
//! |       qsu rt        |
//! +---------------------+
//! |  Platform service   |  (Windows Service subsystem, systemd, etc)
//! |      subsystem      |
//! +---------------------+
//! </pre>
//!
//! The primary goal of the _qsu_ runtime is to provide a consistent interface
//! that will be mapped to whatever service subsystem it is actually running
//! on.  This includes having no service subsystem at all, such as when
//! running the server application as a regular foreground process (which is
//! common during development).
//!
//! # How server applications are implemented and used
//! _qsu_ needs to know what kind of runtime the server application expects.
//! Server applications pick the runtime type by implementing a trait, of which
//! there are currently three recognized types:
//! - [`ServiceHandler`] is used for "non-async" server applications.
//! - [`TokioServiceHandler`] is used for server applications that run under
//!  the tokio executor.
//! - [`RocketServiceHandler`] is for server applications that are built on top
//!   of the Rocket HTTP framework.
//!
//! Each of these traits implements three methods:
//! - `init()` is for initializing the service.
//! - `run()` is for running the actual server application.
//! - `shutdown()` is for shutting down the server application.
//!
//! The actual trait methods may look quite different, depending on the trait
//! being used.
//!
//! Once a service trait has been implemented, the application creates a
//! [`RunCtx`] object and calls its [`run()`](RunCtx::run()) method, passing in
//! a service implementation object.
//!
//! Note: Only one service wrapper may be initialized per process.
//!
//! ## Service Handler semantics
//! When a handler is run through the [`RunCtx`] it will first call the
//! handler's `init()` method.  If it returns `Ok()`, its `run()` method will
//! be run.
//!
//! The handler's `shutdown()` will be called regardless of whether `init()` or
//! `run()` was successful (the only precondition for `shutdown()` to be called
//! is that `init()` was called).
//!
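//! As a rough sketch (the exact re-export paths, the `MySvc` type and the
//! `"hellosvc"` service name are assumptions made for illustration), a
//! minimal non-async service could look something like this:
//!
//! ```no_run
//! use qsu::{
//!   rt::{RunCtx, ServiceHandler, StartState, StopState, SvcEvt, SvcEvtReader},
//!   AppErr
//! };
//!
//! struct MySvc;
//!
//! impl ServiceHandler for MySvc {
//!   fn init(&mut self, ss: StartState) -> Result<(), AppErr> {
//!     // Report startup progress to the service subsystem.
//!     ss.report(Some("initializing".into()));
//!     Ok(())
//!   }
//!
//!   fn run(&mut self, mut ser: SvcEvtReader) -> Result<(), AppErr> {
//!     // Block until the service subsystem asks us to stop.
//!     while let Some(evt) = ser.recv() {
//!       if matches!(evt, SvcEvt::Shutdown | SvcEvt::Terminate) {
//!         break;
//!       }
//!     }
//!     Ok(())
//!   }
//!
//!   fn shutdown(&mut self, ss: StopState) -> Result<(), AppErr> {
//!     ss.report(Some("cleaning up".into()));
//!     Ok(())
//!   }
//! }
//!
//! fn main() {
//!   RunCtx::new("hellosvc")
//!     .run_sync(Box::new(MySvc))
//!     .expect("service failed");
//! }
//! ```
//!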
//! # Application errors
//! The _qsu_ runtime is initialized and run by the application, which is
//! then called back from the _qsu_ runtime.  This has the unfortunate side
//! effect of creating a kind of barrier between the application's "outside"
//! (the part that sets up and runs the _qsu_ runtime) and the "inside" (the
//! service trait callback methods).  Specifically, if an error occurs in the
//! "inner" server application code, the "outer" application code may want to
//! know exactly what the inner error was.
//!
//! _qsu_ bridges this gap by providing the [`AppErr`] type for the `Err()`
//! case of the callbacks.  The `AppErr` is a newtype over a boxed `Any` type.
//! In order to get at the original error value from the "inside" the `AppErr`
//! needs to be unwrapped.  See the [`AppErr`] documentation for more
//! information.
//!
//! Presumably the application has its own `Error` type.  To allow the
//! callbacks to return application-defined errors using the regular question
//! mark operator, it may be helpful to add the following error conversion to
//! the error module:
//!
//! ```
//! // Application-specific Error type
//! enum Error {
//!   // .. application-defined error variants
//! }
//!
//! impl From<Error> for qsu::AppErr {
//!   fn from(err: Error) -> qsu::AppErr {
//!     qsu::AppErr::new(err)
//!   }
//! }
//! ```
//!
//! # Using the argument parser
//! _qsu_ offers an [argument parser](crate::argp::ArgParser), which can
//! abstract away much of the runtime management.

mod nosvc;
mod rttype;
mod signals;

#[cfg(all(target_os = "linux", feature = "systemd"))]
#[cfg_attr(docsrs, doc(cfg(feature = "systemd")))]
mod systemd;

#[cfg(windows)]
pub mod winsvc;

use std::sync::{
  atomic::{AtomicU32, Ordering},
  Arc
};

#[cfg(any(feature = "tokio", feature = "rocket"))]
use async_trait::async_trait;

#[cfg(feature = "tokio")]
use tokio::runtime;

use tokio::sync::broadcast;


use crate::{err::Error, lumberjack::LumberJack, AppErr};


/// Used to pass an optional message to the service subsystem whenever a
/// startup or shutdown checkpoint has been reached.
pub enum StateMsg {
  Ref(&'static str),
  Owned(String)
}

impl From<&'static str> for StateMsg {
  fn from(msg: &'static str) -> Self {
    Self::Ref(msg)
  }
}

impl From<String> for StateMsg {
  fn from(msg: String) -> Self {
    Self::Owned(msg)
  }
}

impl AsRef<str> for StateMsg {
  fn as_ref(&self) -> &str {
    match self {
      StateMsg::Ref(s) => s,
      StateMsg::Owned(s) => s
    }
  }
}


/// Report the current startup/shutdown state to the platform service
/// subsystem.
pub(crate) trait StateReporter {
  fn starting(&self, checkpoint: u32, msg: Option<StateMsg>);

  fn started(&self);

  fn stopping(&self, checkpoint: u32, msg: Option<StateMsg>);

  fn stopped(&self);
}


/// Report startup checkpoints to the service subsystem.
///
/// An instance of this is handed to the server application through the service
/// handler's `init()` trait method.
pub struct StartState {
  sr: Arc<dyn StateReporter + Send + Sync>,
  cnt: Arc<AtomicU32>
}
impl StartState {
  pub fn report(&self, status: Option<StateMsg>) {
    let checkpoint = self.cnt.fetch_add(1, Ordering::SeqCst);
    self.sr.starting(checkpoint, status);
  }
}

/// Report shutdown checkpoints to the service subsystem.
///
/// An instance of this is handed to the server application through the service
/// handler's `shutdown()` trait method.
pub struct StopState {
  sr: Arc<dyn StateReporter + Send + Sync>,
  cnt: Arc<AtomicU32>
}
impl StopState {
  pub fn report(&self, status: Option<StateMsg>) {
    let checkpoint = self.cnt.fetch_add(1, Ordering::SeqCst);
    self.sr.stopping(checkpoint, status);
  }
}
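
// As an illustration of checkpoint reporting (the message texts and the
// `nplugins` variable are hypothetical), a handler's init() might do:
//
//   ss.report(Some("binding listener sockets".into()));
//   ss.report(Some(format!("loaded {} plugins", nplugins).into()));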


/// "Synchronous" (non-`async`) server application.
///
/// Implement this for an object that wraps a server application that does not
/// use an async runtime.
pub trait ServiceHandler {
  fn init(&mut self, ss: StartState) -> Result<(), AppErr>;

  fn run(&mut self, ser: SvcEvtReader) -> Result<(), AppErr>;

  fn shutdown(&mut self, ss: StopState) -> Result<(), AppErr>;
}


/// `async` server application built on the tokio runtime.
///
/// Implement this for an object that wraps a server application that uses
/// tokio as an async runtime.
#[cfg(feature = "tokio")]
#[cfg_attr(docsrs, doc(cfg(feature = "tokio")))]
#[async_trait]
pub trait TokioServiceHandler {
  async fn init(&mut self, ss: StartState) -> Result<(), AppErr>;

  async fn run(&mut self, ser: SvcEvtReader) -> Result<(), AppErr>;

  async fn shutdown(&mut self, ss: StopState) -> Result<(), AppErr>;
}


/// Rocket server application handler.
///
/// While Rocket is built on top of tokio, it \[Rocket\] wants to initialize
/// tokio itself.
///
/// There are two major ways to write Rocket services using qsu.  The first is
/// to let qsu be aware of the server application's `Rocket` instances.  The
/// application does this by creating the `Rocket` instances in
/// `RocketServiceHandler::init()` and returning them.  _qsu_ will ignite these
/// rockets and pass them to `RocketServiceHandler::run()`.  The application is
/// responsible for launching the rockets at this point.
///
/// The other way to do it is to completely manage the `Rocket` instances in
/// application code (by not returning rocket instances from `init()`).
///
/// Allowing _qsu_ to manage the `Rocket` instances will cause _qsu_ to request
/// graceful shutdown of all `Rocket` instances once a `SvcEvt::Shutdown` is
/// sent by the runtime.
///
/// It is recommended that `ctrlc` shutdown and termination signals are
/// disabled in each `Rocket` instance's configuration, and allow the _qsu_
/// runtime to be responsible for initiating the `Rocket` shutdown.
#[cfg(feature = "rocket")]
#[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
#[async_trait]
pub trait RocketServiceHandler {
  /// Rocket service initialization.
  ///
  /// The returned `Rocket`s will be ignited and their shutdown handlers will
  /// be triggered on shutdown.
  async fn init(
    &mut self,
    ss: StartState
  ) -> Result<Vec<rocket::Rocket<rocket::Build>>, AppErr>;

  /// Server application main entry point.
  ///
  /// If the `init()` trait method returned `Rocket<Build>` instances to the
  /// qsu runtime it will ignite these and pass them to `run()`.  It is the
  /// responsibility of this method to launch the rockets and await them.
  async fn run(
    &mut self,
    rockets: Vec<rocket::Rocket<rocket::Ignite>>,
    ser: SvcEvtReader
  ) -> Result<(), AppErr>;

  async fn shutdown(&mut self, ss: StopState) -> Result<(), AppErr>;
}
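
// A rough sketch of a handler that lets qsu manage a single Rocket instance
// (routes are omitted, and the error mapping assumes `AppErr::new` accepts
// the Rocket error type):
//
//   async fn init(
//     &mut self,
//     _ss: StartState
//   ) -> Result<Vec<rocket::Rocket<rocket::Build>>, AppErr> {
//     Ok(vec![rocket::build()])
//   }
//
//   async fn run(
//     &mut self,
//     rockets: Vec<rocket::Rocket<rocket::Ignite>>,
//     _ser: SvcEvtReader
//   ) -> Result<(), AppErr> {
//     // qsu notifies each instance's Shutdown handle when the service
//     // subsystem requests a shutdown, so launch() returns on its own.
//     for rocket in rockets {
//       rocket.launch().await.map_err(AppErr::new)?;
//     }
//     Ok(())
//   }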


/// Event notifications that originate from the service subsystem that is
/// controlling the server application.
#[derive(Clone, Debug)]
pub enum SvcEvt {
  /// Service subsystem has requested that the server application should pause
  /// its operations.
  ///
  /// Only the Windows service subsystem will emit these events.
  Pause,

  /// Service subsystem has requested that the server application should
  /// resume its operations.
  ///
  /// Only the Windows service subsystem will emit these events.
  Resume,

  /// Service subsystem has requested that the services configuration should
  /// be reread.
  ///
  /// On Unixy platforms this is triggered by SIGHUP, and is unsupported on
  /// Windows.
  ReloadConf,

  /// The service subsystem (or equivalent) has requested that the service
  /// shutdown.
  Shutdown,

  /// The service subsystem (or equivalent) has requested that the service
  /// terminate.
  Terminate
}


/// Channel end-point used to receive events from the service subsystem.
pub struct SvcEvtReader {
  rx: broadcast::Receiver<SvcEvt>
}

impl SvcEvtReader {
  /// Block and wait for an event.
  ///
  /// Once `SvcEvt::Shutdown` or `SvcEvt::Terminate` has been received, this
  /// method should not be called again.
  pub fn recv(&mut self) -> Option<SvcEvt> {
    self.rx.blocking_recv().ok()
  }

  /// Attempt to get next event.
  ///
  /// Once `SvcEvt::Shutdown` or `SvcEvt::Terminate` has been received, this
  /// method should not be called again.
  pub fn try_recv(&mut self) -> Option<SvcEvt> {
    self.rx.try_recv().ok()
  }

  /// Async wait for an event.
  ///
  /// Once `SvcEvt::Shutdown` or `SvcEvt::Terminate` has been received, this
  /// method should not be called again.
  pub async fn arecv(&mut self) -> Option<SvcEvt> {
    self.rx.recv().await.ok()
  }
}
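
// A sketch of how an async handler's run() loop could consume these events
// (`do_work()` and `reload_config()` are hypothetical placeholders for the
// application's own logic):
//
//   loop {
//     tokio::select! {
//       evt = ser.arecv() => match evt {
//         Some(SvcEvt::Shutdown | SvcEvt::Terminate) | None => break,
//         Some(SvcEvt::ReloadConf) => reload_config(),
//         _ => {}
//       },
//       _ = do_work() => {}
//     }
//   }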


/// The server application runtime type.

pub enum SrvAppRt {
  /// A plain non-async (sometimes referred to as "blocking") server
  /// application.
  Sync(Box<dyn ServiceHandler + Send>),

  /// Initialize a tokio runtime, and call each application handler from an
  /// async context.
  #[cfg(feature = "tokio")]
  #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))]
  Tokio(
    Option<runtime::Builder>,
    Box<dyn TokioServiceHandler + Send>
  ),

  /// Allow Rocket to initialize the tokio runtime, and call each application
  /// handler from an async context.
  #[cfg(feature = "rocket")]
  #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
  Rocket(Box<dyn RocketServiceHandler + Send>)
}


/// Service runner context.
pub struct RunCtx {
  service: bool,
  svcname: String,
  log_init: bool,
  test_mode: bool
}

impl RunCtx {
  /// Run as a systemd service.
  #[cfg(all(target_os = "linux", feature = "systemd"))]
  fn systemd(self, st: SrvAppRt) -> Result<(), Error> {
    LumberJack::default().set_init(self.log_init).init()?;

    tracing::debug!("Running service '{}'", self.svcname);

    let reporter = Arc::new(systemd::ServiceReporter {});

    let res = match st {
      SrvAppRt::Sync(handler) => {
        rttype::sync_main(handler, reporter, None, self.test_mode)
      }
      SrvAppRt::Tokio(rtbldr, handler) => {
        rttype::tokio_main(rtbldr, handler, reporter, None)
      }
      SrvAppRt::Rocket(handler) => rttype::rocket_main(handler, reporter, None)
    };

    res
  }

  /// Run as a Windows service.
  #[cfg(windows)]
  fn winsvc(self, st: SrvAppRt) -> Result<(), Error> {
    winsvc::run(&self.svcname, st)?;

    Ok(())
  }

  /// Run as a foreground server
  fn foreground(self, st: SrvAppRt) -> Result<(), Error> {
    LumberJack::default().set_init(self.log_init).init()?;

    tracing::debug!("Running service '{}'", self.svcname);

    let reporter = Arc::new(nosvc::ServiceReporter {});

    match st {
      SrvAppRt::Sync(handler) => {
        rttype::sync_main(handler, reporter, None, self.test_mode)
      }

      #[cfg(feature = "tokio")]
      SrvAppRt::Tokio(rtbldr, handler) => {
        rttype::tokio_main(rtbldr, handler, reporter, None)
      }

      #[cfg(feature = "rocket")]
      SrvAppRt::Rocket(handler) => rttype::rocket_main(handler, reporter, None)
    }
  }
}

impl RunCtx {
  /// Create a new service running context.
  pub fn new(name: &str) -> Self {
    Self {
      service: false,
      svcname: name.into(),
      log_init: true,
      test_mode: false
    }
  }

  /// Enable test mode.
  ///
  /// This method is intended for tests only.
  ///
  /// qsu performs a few global initialization that will fail if run repeatedly
  /// within the same process.  This causes some problem when running tests,
  /// because rust may run tests in threads within the same process.
  #[doc(hidden)]
  pub fn test_mode(mut self) -> Self {
    self.log_init = false;
    self.test_mode = true;
    self
  }

  /// Disable logging/tracing initialization.
  ///
  /// This is useful in tests because tests may run in different threads within
  /// the same process, causing the log/tracing initialization to panic.
  #[doc(hidden)]
  pub fn log_init(mut self, flag: bool) -> Self {
    self.log_init = flag;
    self
  }

  /// Reference version of [`RunCtx::log_init()`].
  #[doc(hidden)]
  pub fn log_init_ref(&mut self, flag: bool) -> &mut Self {
    self.log_init = flag;
    self
  }

  /// Mark this run context to run under the operating system's service
  /// subsystem, if one is available on this platform.
  pub fn service(mut self) -> Self {
    self.service = true;
    self
  }

  /// Mark this run context to run under the operating system's service
  /// subsystem, if one is available on this platform.
  pub fn service_ref(&mut self) -> &mut Self {
    self.service = true;
    self
  }
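
  // For illustration (the `run_as_service` flag and the `MySvc` handler are
  // hypothetical; an argument parser would normally make this decision),
  // enabling service mode conditionally could look like:
  //
  //   let mut ctx = RunCtx::new("hellosvc");
  //   if run_as_service {
  //     ctx.service_ref();
  //   }
  //   ctx.run_sync(Box::new(MySvc))?;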

  /// Launch the application.
  ///
  /// If this `RunCtx` has been marked as a _service_ then it will perform the
  /// appropriate service subsystem integration before running the actual
  /// server application code.
  ///
  /// This function must only be called from the main thread of the process,
  /// and must be called before any other threads are started.
  pub fn run(self, st: SrvAppRt) -> Result<(), Error> {
    if self.service {
      #[cfg(all(target_os = "linux", feature = "systemd"))]
      self.systemd(st)?;

      #[cfg(windows)]
      self.winsvc(st)?;

      // ToDo: We should check for other platforms here (like macOS/launchd)
    } else {
      // Do not run against any specific service subsystem.  Despite its name
      // this isn't necessarily running as a foreground process; some service
      // subsystems do not make a distinction.  Perhaps a better mental model
      // is that certain service subsystems expect to run regular "foreground"
      // processes.
      self.foreground(st)?;
    }

    Ok(())
  }

  /// Convenience method around [`Self::run()`] using [`SrvAppRt::Sync`].
  pub fn run_sync(
    self,
    handler: Box<dyn ServiceHandler + Send>
  ) -> Result<(), Error> {
    self.run(SrvAppRt::Sync(handler))
  }

  /// Convenience method around [`Self::run()`] using [`SrvAppRt::Tokio`].
  #[cfg(feature = "tokio")]
  #[cfg_attr(docsrs, doc(cfg(feature = "tokio")))]
  pub fn run_tokio(
    self,
    rtbldr: Option<runtime::Builder>,
    handler: Box<dyn TokioServiceHandler + Send>
  ) -> Result<(), Error> {
    self.run(SrvAppRt::Tokio(rtbldr, handler))
  }
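
  // A brief sketch of supplying a custom runtime builder (the worker-thread
  // count is arbitrary and `MySvc` is a hypothetical `TokioServiceHandler`):
  //
  //   let mut bldr = tokio::runtime::Builder::new_multi_thread();
  //   bldr.worker_threads(2).enable_all();
  //   RunCtx::new("hellosvc").run_tokio(Some(bldr), Box::new(MySvc))?;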

  /// Convenience method around [`Self::run()`] using [`SrvAppRt::Rocket`].
  #[cfg(feature = "rocket")]
  #[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
  pub fn run_rocket(
    self,
    handler: Box<dyn RocketServiceHandler + Send>
  ) -> Result<(), Error> {
    self.run(SrvAppRt::Rocket(handler))
  }
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/nosvc.rs.

//! Not a service module.
//!
//! It might seem odd to have a no-service module in a service library, which
//! is true.  But it simplifies the service application code by allowing it to
//! treat all cases (roughly) equally.

use super::StateMsg;

/// A no-op service reporter, used when no service subsystem is being used, or
/// that subsystem does not require/support any incoming notifications.
pub struct ServiceReporter {}

impl super::StateReporter for ServiceReporter {
  fn starting(&self, _checkpoint: u32, _status: Option<StateMsg>) {}

  fn started(&self) {}

  fn stopping(&self, _checkpoint: u32, _status: Option<StateMsg>) {}

  fn stopped(&self) {}
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/rttype.rs.

//! Runtime types.
//!
//! This module is used to collect the various supported "runtime types" or
//! "run contexts".

mod sync;

#[cfg(feature = "rocket")]
#[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
mod rocket;

#[cfg(feature = "tokio")]
#[cfg_attr(docsrs, doc(cfg(feature = "tokio")))]
mod tokio;

pub(crate) use sync::sync_main;

#[cfg(feature = "rocket")]
#[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
pub(crate) use rocket::rocket_main;

#[cfg(feature = "tokio")]
#[cfg_attr(docsrs, doc(cfg(feature = "tokio")))]
pub(crate) use tokio::tokio_main;

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/rttype/rocket.rs.

//! Rocket runtime module.
//!
//! While Rocket uses tokio, it [Rocket] insists on initializing tokio for
//! itself.  Attempting to use the `TokioServiceHandler` will cause `Rocket`s
//! to issue a warning at startup.
//!
//! As a convenience _qsu_ can keep track of rockets and automatically shut
//! them down once the service subsystem requests a shutdown.  To use this
//! feature, the server application should return a `Vec<Rocket<Build>>` from
//! `RocketServiceHandler::init()`.  Any `Rocket` instance in this vec will be
//! ignited before being passed to `RocketServiceHandler::run()`.
//!
//! Server applications that do not want to use this feature should return an
//! empty vector from `init()`.  In that case the application code is also
//! responsible for triggering the shutdown of each instance itself.

use std::sync::{atomic::AtomicU32, Arc};

use tokio::{sync::broadcast, task};

use killswitch::KillSwitch;

use crate::{
  err::{AppErrors, Error},
  rt::{
    signals, RocketServiceHandler, StartState, StateReporter, StopState,
    SvcEvt, SvcEvtReader
  }
};


/// Internal "main()" routine for server applications that run one or more
/// Rockets as their main application.
pub(crate) fn rocket_main(
  handler: Box<dyn RocketServiceHandler + Send>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  rocket::execute(rocket_async_main(handler, sr, rx_svcevt))?;

  Ok(())
}

async fn rocket_async_main(
  mut handler: Box<dyn RocketServiceHandler + Send>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let ks = KillSwitch::new();

  // If a SvcEvt receiver end-point was handed to us, then use it.  Otherwise
  // create our own and spawn the monitoring tasks that will generate events
  // for it.
  let rx_svcevt = if let Some(rx_svcevt) = rx_svcevt {
    rx_svcevt
  } else {
    // Create channel used to signal events to application
    let (tx, rx) = broadcast::channel(16);

    let ks2 = ks.clone();

    // SIGINT (on unix) and Ctrl+C on Windows should trigger a Shutdown event.
    let txc = tx.clone();
    task::spawn(signals::wait_shutdown(
      move || {
        if let Err(e) = txc.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
        }
      },
      ks2
    ));

    // SIGTERM (on unix) and Ctrl+Break/Close on Windows should trigger a
    // Terminate event.
    let txc = tx.clone();
    let ks2 = ks.clone();
    task::spawn(signals::wait_term(
      move || {
        if let Err(e) = txc.send(SvcEvt::Terminate) {
          log::error!("Unable to send SvcEvt::Terminate event; {}", e);
        }
      },
      ks2
    ));

    // There doesn't seem to be anything equivalent to SIGHUP for Windows
    // (Services)
    #[cfg(unix)]
    {
      let ks2 = ks.clone();

      let txc = tx.clone();
      task::spawn(signals::wait_reload(
        move || {
          if let Err(e) = txc.send(SvcEvt::ReloadConf) {
            log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
          }
        },
        ks2
      ));
    }

    rx
  };

  let mut rx_svcevt2 = rx_svcevt.resubscribe();

  let set = Box::new(SvcEvtReader { rx: rx_svcevt });

  // Call application's init() method.
  let ss = StartState {
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let (rockets, init_apperr) = match handler.init(ss).await {
    Ok(rockets) => (rockets, None),
    Err(e) => (Vec::new(), Some(e))
  };

  // Ignite rockets so we can get Shutdown contexts for each of the instances.
  let mut ignited = vec![];
  let mut rocket_shutdowns = vec![];
  for rocket in rockets {
    let rocket = rocket.ignite().await?;
    rocket_shutdowns.push(rocket.shutdown());
    ignited.push(rocket);
  }

  // Set the service's state to "started"
  sr.started();

  // Launch a task that waits for the SvcEvt::Shutdown event.  Once it
  // arrives, tell all rocket instances to shut down gracefully.
  //
  // Note: We don't want to use the killswitch for this because the killswitch
  // isn't triggered until run() has returned, and the shutdown event itself
  // should be what triggers the rockets' graceful shutdown.
  let jh_graceful_landing = task::spawn(async move {
    loop {
      match rx_svcevt2.recv().await {
        Ok(SvcEvt::Shutdown) => {
          tracing::trace!("Ask rocket instances to shut down gracefully");
          for shutdown in rocket_shutdowns {
            // Tell this rocket instance to shut down gracefully.
            shutdown.notify();
          }
          break;
        }
        Ok(SvcEvt::Terminate) => {
          tracing::trace!("Ask rocket instances to shut down gracefully");
          for shutdown in rocket_shutdowns {
            // Tell this rocket instance to shut down gracefully.
            shutdown.notify();
          }
          break;
        }
        Ok(_) => {
          tracing::trace!("Ignored message in wask waiting for shutdown");
          continue;
        }
        Err(e) => {
          log::error!("Unable to receive broadcast SvcEvt message, {}", e);
          break;
        }
      }
    }
  });

  let run_apperr = if init_apperr.is_none() {
    sr.started();
    handler.run(ignited, *set).await.err()
  } else {
    None
  };


  // Always send the first shutdown checkpoint here.  Either init() failed or
  // run returned.  Either way, we're shutting down.
  sr.stopping(1, None);

  // Now that the main application has terminated kill off any remaining
  // auxiliary tasks (read: signal waiters)
  ks.trigger();

  // .. and wait for all task that is waiting for a shutdown event to complete
  if let Err(e) = jh_graceful_landing.await {
    log::warn!(
      "An error was returned from the graceful landing task; {}",
      e
    );
  }

  if (ks.finalize().await).is_err() {
    log::warn!("Attempted to finalize KillSwitch that wasn't triggered yet");
  }

  // Call the application's shutdown() function.
  let ss = StopState {
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let term_apperr = handler.shutdown(ss).await.err();

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  // There can be multiple failures, and we don't want to lose information
  // about what went wrong, so return an error context that can contain all
  // callback errors.
  if init_apperr.is_some() || run_apperr.is_some() || term_apperr.is_some() {
    let apperrs = AppErrors {
      init: init_apperr,
      run: run_apperr,
      shutdown: term_apperr
    };
    Err(Error::SrvApp(apperrs))?;
  }

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/rttype/sync.rs.

use std::sync::{atomic::AtomicU32, Arc};

use tokio::sync::broadcast;

use crate::{
  err::{AppErrors, Error},
  rt::{
    signals, ServiceHandler, StartState, StateReporter, StopState, SvcEvt,
    SvcEvtReader
  }
};

#[cfg(unix)]
use crate::rt::signals::SigType;

/// Internal "main()" routine for server applications that run plain old
/// non-`async` code.
pub(crate) fn sync_main(
  mut handler: Box<dyn ServiceHandler>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>,
  test_mode: bool
) -> Result<(), Error> {
  // Get rid of unused variable warning
  #[cfg(unix)]
  let _ = test_mode;

  let rx_svcevt = if let Some(rx_svcevt) = rx_svcevt {
    // Use the broadcast receiver supplied by caller (it likely originates from
    // a service runtime integration).
    rx_svcevt
  } else {
    let (tx, rx) = broadcast::channel(16);

    #[cfg(unix)]
    signals::sync_sigmon(move |st| match st {
      SigType::Int => {
        if let Err(e) = tx.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
        }
      }
      SigType::Term => {
        if let Err(e) = tx.send(SvcEvt::Terminate) {
          log::error!("Unable to send SvcEvt::Terminate event; {}", e);
        }
      }
      SigType::Hup => {
        if let Err(e) = tx.send(SvcEvt::ReloadConf) {
          log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
        }
      }
    })?;

    // On Windows, if rx_svcevt is None, it means we're not running under the
    // service subsystem (i.e. we're running as a foreground process), so
    // register a Ctrl+C handler.
    #[cfg(windows)]
    signals::sync_kill_to_event(tx, test_mode)?;

    rx
  };

  let set = Box::new(SvcEvtReader { rx: rx_svcevt });

  // Call server application's init() method, passing along a startup state
  // reporter object.
  let ss = StartState {
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let init_apperr = handler.init(ss).err();

  // If init() was successful, set the service's state to "started" and then
  // call the server application's run() method.
  let run_apperr = if init_apperr.is_none() {
    sr.started();
    handler.run(*set).err()
  } else {
    None
  };

  // Always send the first shutdown checkpoint here.  Either init() failed or
  // run returned.  Either way, we're shutting down.
  sr.stopping(1, None);

  // Call the application's shutdown() function, passing along a shutdown state
  // reporter object.
  let ss = StopState {
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let term_apperr = handler.shutdown(ss).err();

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  // There can be multiple failures, and we don't want to lose information
  // about what went wrong, so return an error context that can contain all
  // callback errors.
  if init_apperr.is_some() || run_apperr.is_some() || term_apperr.is_some() {
    let apperrs = AppErrors {
      init: init_apperr,
      run: run_apperr,
      shutdown: term_apperr
    };
    Err(Error::SrvApp(apperrs))?;
  }

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/rttype/tokio.rs.

use std::sync::{atomic::AtomicU32, Arc};

use tokio::{runtime, sync::broadcast, task};

use crate::{
  err::{AppErrors, Error},
  rt::{
    signals, StartState, StateReporter, StopState, SvcEvt, SvcEvtReader,
    TokioServiceHandler
  }
};

use killswitch::KillSwitch;


/// Internal "main()" routine for server applications that run the tokio
/// runtime for `async` code.
pub(crate) fn tokio_main(
  rtbldr: Option<runtime::Builder>,
  handler: Box<dyn TokioServiceHandler>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let rt = if let Some(mut bldr) = rtbldr {
    bldr.build()?
  } else {
    tokio::runtime::Runtime::new()?
  };
  rt.block_on(tokio_async_main(handler, sr, rx_svcevt))?;

  Ok(())
}

/// The `async` main function for tokio servers.
///
/// If `rx_svcevt` is `Some(_)` it means the channel was created elsewhere
/// (implied: The transmitting endpoint lives somewhere else).  If it is `None`
/// the channel needs to be created.
async fn tokio_async_main(
  mut handler: Box<dyn TokioServiceHandler>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let ks = KillSwitch::new();

  // If a SvcEvt receiver end-point was handed to us, then use it.  Otherwise
  // create our own and spawn the monitoring tasks that will generate events
  // for it.
  let rx_svcevt = if let Some(rx_svcevt) = rx_svcevt {
    rx_svcevt
  } else {
    // Create channel used to signal events to application
    let (tx, rx) = broadcast::channel(16);

    let ks2 = ks.clone();

    // SIGINT (on unix) and Ctrl+C on Windows should trigger a Shutdown event.
    let txc = tx.clone();
    task::spawn(signals::wait_shutdown(
      move || {
        if let Err(e) = txc.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
        }
      },
      ks2
    ));

    // SIGTERM (on unix) and Ctrl+Break/Close on Windows should trigger a
    // Terminate event.
    let txc = tx.clone();
    let ks2 = ks.clone();
    task::spawn(signals::wait_term(
      move || {
        if let Err(e) = txc.send(SvcEvt::Terminate) {
          log::error!("Unable to send SvcEvt::Terminate event; {}", e);
        }
      },
      ks2
    ));

    // There doesn't seem to be anything equivalent to SIGHUP for Windows
    // (Services)
    #[cfg(unix)]
    {
      let ks2 = ks.clone();

      let txc = tx.clone();
      task::spawn(signals::wait_reload(
        move || {
          if let Err(e) = txc.send(SvcEvt::ReloadConf) {
            log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
          }
        },
        ks2
      ));
    }

    rx
  };

  let set = Box::new(SvcEvtReader { rx: rx_svcevt });

  // Call application's init() method.
  let ss = StartState {
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let init_apperr = handler.init(ss).await.err();

  let run_apperr = if init_apperr.is_none() {
    sr.started();
    handler.run(*set).await.err()
  } else {
    None
  };

  // Always send the first shutdown checkpoint here.  Either init() failed or
  // run returned.  Either way, we're shutting down.
  sr.stopping(1, None);


  // Now that the main application has terminated kill off any remaining
  // auxiliary tasks (read: signal waiters)
  ks.trigger();

  if (ks.finalize().await).is_err() {
    log::warn!("Attempted to finalize KillSwitch that wasn't triggered yet");
  }

  // Call the application's shutdown() function.
  let ss = StopState {
    sr: Arc::clone(&sr),
    cnt: Arc::new(AtomicU32::new(2)) // 1 is used by the runtime, so start at 2
  };
  let term_apperr = handler.shutdown(ss).await.err();

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  // There can be multiple failures, and we don't want to lose information
  // about what went wrong, so return an error context that can contain all
  // callback errors.
  if init_apperr.is_some() || run_apperr.is_some() || term_apperr.is_some() {
    let apperrs = AppErrors {
      init: init_apperr,
      run: run_apperr,
      shutdown: term_apperr
    };
    Err(Error::SrvApp(apperrs))?;
  }

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/signals.rs.

//! Signal monitoring.

#[cfg(unix)]
mod unix;

#[cfg(windows)]
mod win;

#[cfg(unix)]
pub use unix::{sync_sigmon, SigType};

#[cfg(all(unix, feature = "tokio"))]
pub use unix::{wait_reload, wait_shutdown, wait_term};

#[cfg(windows)]
pub use win::{wait_shutdown, wait_term};

#[cfg(windows)]
pub(crate) use win::sync_kill_to_event;

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/signals/unix.rs.

use std::thread;

use nix::sys::signal::{SigSet, SigmaskHow, Signal};

#[cfg(feature = "tokio")]
use tokio::signal::unix::{signal, SignalKind};

#[cfg(feature = "tokio")]
use killswitch::KillSwitch;

use crate::err::Error;


/// Async task used to wait for SIGINT/SIGTERM.
///
/// Whenever a SIGINT or SIGTERM is signalled the closure in `f` is called and
/// the task is terminated.
#[cfg(feature = "tokio")]
pub async fn wait_shutdown<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("SIGINT task launched");

  let Ok(mut sigint) = signal(SignalKind::interrupt()) else {
    log::error!("Unable to create SIGINT Future");
    return;
  };

  // Wait for SIGINT.
  tokio::select! {
    _ = sigint.recv() => {
      tracing::debug!("Received SIGINT -- running closure");
      f();
    },
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_shutdown() terminating");
}

#[cfg(feature = "tokio")]
pub async fn wait_term<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("SIGTERM task launched");

  let Ok(mut sigterm) = signal(SignalKind::terminate()) else {
    log::error!("Unable to create SIGTERM Future");
    return;
  };

  // Wait for SIGTERM.
  tokio::select! {
    _ = sigterm.recv() => {
      tracing::debug!("Received SIGTERM -- running closure");
      f();
    }
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_term() terminating");
}

/// Async task used to wait for SIGHUP.
///
/// Whenever a SIGHUP is signalled the closure in `f` is called.
#[cfg(feature = "tokio")]
pub async fn wait_reload<F>(f: F, ks: KillSwitch)
where
  F: Fn()
{
  tracing::trace!("SIGHUP task launched");

  let Ok(mut sighup) = signal(SignalKind::hangup()) else {
    log::error!("Unable to create SIGHUP Future");
    return;
  };
  loop {
    tokio::select! {
      _ = sighup.recv() => {
        tracing::debug!("Received SIGHUP");
        f();
      },
      _ = ks.wait() => {
        tracing::debug!("killswitch triggered");
        break;
      }
    }
  }

  tracing::trace!("wait_reload() terminating");
}


pub enum SigType {
  Int,
  Term,
  Hup
}

pub fn sync_sigmon<F>(f: F) -> Result<thread::JoinHandle<()>, Error>
where
  F: Fn(SigType) + Send + 'static
{
  //
  // Block signals-of-interest on main thread.
  //
  let mut ss = SigSet::empty();
  ss.add(Signal::SIGINT);
  ss.add(Signal::SIGTERM);
  ss.add(Signal::SIGHUP);

  let mut oldset = SigSet::empty();
  nix::sys::signal::pthread_sigmask(
    SigmaskHow::SIG_SETMASK,
    Some(&ss),
    Some(&mut oldset)
  )
  .unwrap();

  let jh = thread::Builder::new()
    .name("sigmon".into())
    .spawn(move || {
      // Note: Don't need to unblock signals in this thread, because sigwait()
      // does it implicitly.
      let mask = unsafe {
        let mut mask: libc::sigset_t = std::mem::zeroed();
        libc::sigemptyset(&mut mask);
        libc::sigaddset(&mut mask, libc::SIGINT);
        libc::sigaddset(&mut mask, libc::SIGTERM);
        libc::sigaddset(&mut mask, libc::SIGHUP);
        mask
      };

      loop {
        let mut sig: libc::c_int = 0;
        let ret = unsafe { libc::sigwait(&mask, &mut sig) };
        if ret == 0 {
          let signal = Signal::try_from(sig).unwrap();
          match signal {
            Signal::SIGINT => {
              f(SigType::Int);
              break;
            }
            Signal::SIGTERM => {
              f(SigType::Term);
              break;
            }
            Signal::SIGHUP => {
              f(SigType::Hup);
            }
            _ => {}
          }
        }
      }
    })?;

  Ok(jh)
}
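
// A usage sketch (here the closure only logs; in the qsu runtime it forwards
// each signal as a `SvcEvt` on a broadcast channel):
//
//   let _jh = sync_sigmon(|st| match st {
//     SigType::Int | SigType::Term => log::info!("shutdown requested"),
//     SigType::Hup => log::info!("reload requested")
//   })?;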

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/signals/win.rs.

use std::sync::OnceLock;

use tokio::{signal, sync::broadcast};

use windows_sys::Win32::{
  Foundation::{BOOL, FALSE, TRUE},
  System::Console::{
    SetConsoleCtrlHandler, CTRL_BREAK_EVENT, CTRL_CLOSE_EVENT, CTRL_C_EVENT
  }
};

use killswitch::KillSwitch;

use crate::{err::Error, rt::SvcEvt};


static CELL: OnceLock<Box<dyn Fn(u32) -> BOOL + Send + Sync>> =
  OnceLock::new();

/// Async task used to wait for Ctrl+C to be signalled.
///
/// Whenever a Ctrl+C is signalled the closure in `f` is called and
/// the task is terminated.
pub async fn wait_shutdown<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("CTRL+C task launched");

  tokio::select! {
    _ = signal::ctrl_c() => {
      tracing::debug!("Received Ctrl+C");
      // Once any process termination signal has been received, call the
      // callback.
      f();
    },
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_shutdown() terminating");
}

pub async fn wait_term<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("CTRL+Break/Close task launched");

  let Ok(mut cbreak) = signal::windows::ctrl_break() else {
    log::error!("Unable to create Ctrl+Break monitor");
    return;
  };

  let Ok(mut cclose) = signal::windows::ctrl_close() else {
    log::error!("Unable to create Close monitor");
    return;
  };

  tokio::select! {
    _ = cbreak.recv() => {
      tracing::debug!("Received Ctrl+Break");
      // Once any process termination signal has been received, call the
      // callback.
      f();
    },
    _ = cclose.recv() => {
      tracing::debug!("Received Close");
      // Once any process termination signal has been received, call the
      // callback.
      f();
    },
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_term() terminating");
}


pub(crate) fn sync_kill_to_event(
  tx: broadcast::Sender<SvcEvt>,
  test_mode: bool
) -> Result<(), Error> {
  setup_sync_fg_kill_handler(
    move |ty| {
      match ty {
        CTRL_C_EVENT => {
          tracing::trace!(
            "Received some kind of event that should trigger a shutdown."
          );
          if tx.send(SvcEvt::Shutdown).is_ok() {
            // We handled this event
            TRUE
          } else {
            FALSE
          }
        }
        CTRL_BREAK_EVENT | CTRL_CLOSE_EVENT => {
          tracing::trace!(
            "Received some kind of event that should trigger a termination."
          );
          if tx.send(SvcEvt::Terminate).is_ok() {
            // We handled this event
            TRUE
          } else {
            FALSE
          }
        }
        _ => FALSE
      }
    },
    test_mode
  )?;
  Ok(())
}


pub(crate) fn setup_sync_fg_kill_handler<F>(
  f: F,
  test_mode: bool
) -> Result<(), Error>
where
  F: Fn(u32) -> BOOL + Send + Sync + 'static
{
  // The proper way to do this is to use CELL.set(), because this can only
  // be done once for each process.  Tests may run in the same process (on
  // separate threads), which will cause globals like this to be initialized
  // multiple times (which is bad).
  //
  // The workaround is to use get_or_init() instead, but only do it if test
  // mode has been requested.
  if test_mode {
    let _ = CELL.get_or_init(|| Box::new(f));
  } else {
    CELL
      .set(Box::new(f))
      .map_err(|_| Error::internal("Unable to set shared OnceLock cell"))?;
  }

  let rc = unsafe { SetConsoleCtrlHandler(Some(ctrlhandler), 1) };
  // Returns non-zero on success
  (rc != 0)
    .then_some(())
    .ok_or(Error::internal("SetConsoleCtrlHandler failed"))?;

  Ok(())
}

unsafe extern "system" fn ctrlhandler(ty: u32) -> BOOL {
  let Some(f) = CELL.get() else {
    return FALSE;
  };

  f(ty)
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/systemd.rs.

//! systemd service module.
//!
//! Implements systemd-specific service subsystem interactions.

use sd_notify::NotifyState;

use super::StateMsg;

/// A service reporter that sends notifications to systemd.
pub struct ServiceReporter {}

impl super::StateReporter for ServiceReporter {
  fn starting(&self, checkpoint: u32, status: Option<StateMsg>) {
    let text = if let Some(msg) = status {
      format!("Starting[{}] {}", checkpoint, msg.as_ref())
    } else {
      format!("Startup checkpoint {}", checkpoint)
    };

    if let Err(e) = sd_notify::notify(false, &[NotifyState::Status(&text)]) {
      log::error!("Unable to report service started state; {}", e);
    }
  }

  fn started(&self) {
    if let Err(e) = sd_notify::notify(false, &[NotifyState::Ready]) {
      log::error!("Unable to report service started state; {}", e);
    }
  }

  fn stopping(&self, checkpoint: u32, status: Option<StateMsg>) {
    // For systemd, the notification is merely that stopping has been
    // initiated.
    // ToDo: First checkpoint is 1?
    if checkpoint == 0 {
      if let Err(e) = sd_notify::notify(false, &[NotifyState::Stopping]) {
        log::error!("Unable to report service started state; {}", e);
      }
    }

    let text = if let Some(msg) = status {
      format!("Stopping[{}] {}", checkpoint, msg.as_ref())
    } else {
      format!("Stopping checkpoint {}", checkpoint)
    };

    // ToDo: Is it okay to set status after "Stopping" has been set?
    if let Err(e) = sd_notify::notify(false, &[NotifyState::Status(&text)]) {
      log::error!("Unable to report service started state; {}", e);
    }
  }

  fn stopped(&self) {}
}
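
// Note that these notifications only reach systemd when the unit is declared
// as a notify-type service.  A minimal unit-file sketch (binary path and name
// are hypothetical):
//
//   [Service]
//   Type=notify
//   ExecStart=/usr/local/bin/hellosvc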

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added src/rt/winsvc.rs.

//! Windows service module.

use std::{
  ffi::OsString,
  sync::{Arc, OnceLock},
  thread,
  time::Duration
};

use parking_lot::Mutex;

use tokio::sync::{
  broadcast,
  mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},
  oneshot
};

use windows_service::{
  define_windows_service,
  service::{
    ServiceControl, ServiceControlAccept, ServiceExitCode, ServiceState,
    ServiceStatus, ServiceType
  },
  service_control_handler::{
    self, ServiceControlHandlerResult, ServiceStatusHandle
  },
  service_dispatcher
};

use winreg::{enums::*, RegKey};

#[cfg(feature = "wait-for-debugger")]
use dbgtools_win::debugger;


use crate::{
  err::Error,
  lumberjack::LumberJack,
  rt::{SrvAppRt, SvcEvt}
};

use super::StateMsg;


const SERVICE_TYPE: ServiceType = ServiceType::OWN_PROCESS;
//const SERVICE_STARTPENDING_TIME: Duration = Duration::from_secs(10);
const SERVICE_STARTPENDING_TIME: Duration = Duration::from_secs(300);
const SERVICE_STOPPENDING_TIME: Duration = Duration::from_secs(30);


/// Messages that are sent to the service subsystem thread from the
/// application.
enum ToSvcMsg {
  Starting(u32),
  Started,
  Stopping(u32),
  Stopped
}

/// Buffer passed from main thread to service subsystem thread via global
/// `OnceLock`.
pub(crate) struct Xfer {
  svcname: String,

  /// Used to send the handshake message from the service handler.
  tx_fromsvc: oneshot::Sender<Result<HandshakeMsg, Error>>
}

/// Used as a "bridge" send information to service thread.
static CELL: OnceLock<Mutex<Option<Xfer>>> = OnceLock::new();


/// Buffer passed back to the application thread from the service subsystem
/// thread.
struct HandshakeMsg {
  /// Channel end-point used to send messages to the service subsystem.
  tx: UnboundedSender<ToSvcMsg>,

  /// Channel end-point used to receive messages from the service subsystem.
  rx: broadcast::Receiver<SvcEvt>
}
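
// How the handshake between the application thread and the service subsystem
// thread plays out in the code below:
//
//   1. run() stores an `Xfer` in CELL, spawns srvapp_thread(), and then
//      blocks in service_dispatcher::start().
//   2. The dispatcher invokes my_service_main(), which takes the `Xfer` back
//      out of CELL and calls svcinit().
//   3. my_service_main() sends the resulting `HandshakeMsg` back over
//      `tx_fromsvc`.
//   4. srvapp_thread() receives the `HandshakeMsg` and starts the selected
//      application runtime.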


/// A service reporter that forwards application state information to the
/// windows service subsystem.
pub struct ServiceReporter {
  tx: UnboundedSender<ToSvcMsg>
}

impl super::StateReporter for ServiceReporter {
  fn starting(&self, checkpoint: u32, status: Option<StateMsg>) {
    if let Err(e) = self.tx.send(ToSvcMsg::Starting(checkpoint)) {
      log::error!("Unable to send Starting message; {}", e);
    }
    if let Some(status) = status {
      log::trace!("Starting[{}] {}", checkpoint, status.as_ref());
    }
  }

  fn started(&self) {
    if let Err(e) = self.tx.send(ToSvcMsg::Started) {
      log::error!("Unable to send Started message; {}", e);
    }
    log::trace!("Started");
  }

  fn stopping(&self, checkpoint: u32, status: Option<StateMsg>) {
    if let Err(e) = self.tx.send(ToSvcMsg::Stopping(checkpoint)) {
      log::error!("Unable to send Stopping message; {}", e);
    }
    if let Some(status) = status {
      log::trace!("Stopping[{}] {}", checkpoint, status.as_ref());
    }
  }

  fn stopped(&self) {
    if let Err(e) = self.tx.send(ToSvcMsg::Stopped) {
      log::error!("Unable to send Stopped message; {}", e);
    }
    log::trace!("Stopped");
  }
}


pub fn run(svcname: &str, st: SrvAppRt) -> Result<(), Error> {
  #[cfg(feature = "wait-for-debugger")]
  {
    debugger::wait_for_then_break();
    debugger::output("Hello, debugger");
  }

  // Create a one-shot channel used to receive an initial handshake from the
  // service handler thread.
  let (tx_fromsvc, rx_fromsvc) = oneshot::channel();

  // Create a buffer that will be used to transfer data to the service
  // subsystem's callback function.
  let xfer = Xfer {
    svcname: svcname.into(),
    tx_fromsvc
  };

  // Store Xfer buffer in the shared state (so the service handler thread can
  // take it out).
  // This must be done _before_ launching the application runtime thread below.
  CELL.get_or_init(|| Mutex::new(Some(xfer)));

  // Launch main application thread.
  //
  // The server application must be run on its own thread because the service
  // dispatcher call below will block the thread.
  let jh = thread::Builder::new()
    .name("svcapp".into())
    .spawn(move || srvapp_thread(st, rx_fromsvc))?;

  // Register generated `ffi_service_main` with the system and start the
  // service, blocking this thread until the service is stopped.
  service_dispatcher::start(svcname, ffi_service_main)?;

  match jh.join() {
    Ok(_) => Ok(()),
    Err(e) => *e
      .downcast::<Result<(), Error>>()
      .expect("Unable to downcast error from svcapp thread")
  }
}

fn srvapp_thread(
  st: SrvAppRt,
  rx_fromsvc: oneshot::Receiver<Result<HandshakeMsg, Error>>
) -> Result<(), Error> {
  // Wait for the service subsystem to report that it has initialized.
  // It passes along a channel end-point that can be used to send events to
  // the service manager.

  let Ok(res) = rx_fromsvc.blocking_recv() else {
    panic!("Unable to receive handshake");
  };

  let Ok(HandshakeMsg { tx, rx }) = res else {
    panic!("Unable to receive handshake");
  };

  let reporter = Arc::new(ServiceReporter { tx: tx.clone() });

  match st {
    SrvAppRt::Sync(handler) => {
      // Don't support test mode when running as a windows service
      crate::rt::rttype::sync_main(handler, reporter, Some(rx), false)
    }
    #[cfg(feature = "tokio")]
    SrvAppRt::Tokio(rtbldr, handler) => {
      crate::rt::rttype::tokio_main(rtbldr, handler, reporter, Some(rx))
    }
    #[cfg(feature = "rocket")]
    SrvAppRt::Rocket(handler) => {
      crate::rt::rttype::rocket_main(handler, reporter, Some(rx))
    }
  }
}


// Generate the windows service boilerplate.  The boilerplate contains the
// low-level service entry function (ffi_service_main) that parses incoming
// service arguments into Vec<OsString> and passes them to user defined service
// entry (my_service_main).
define_windows_service!(ffi_service_main, my_service_main);

fn take_shared_buffer() -> Xfer {
  let Some(x) = CELL.get() else {
    panic!("Unable to get shared buffer");
  };
  x.lock().take().unwrap()
}

/// The `Ok()` return value from [`svcinit()`].
struct InitRes {
  /// Value returned to the server application thread.
  handshake_reply: HandshakeMsg,

  rx_tosvc: UnboundedReceiver<ToSvcMsg>,

  status_handle: ServiceStatusHandle
}

fn my_service_main(_arguments: Vec<OsString>) {
  // Start by pulling out the service name and the channel sender.
  let Xfer {
    svcname,
    tx_fromsvc
  } = take_shared_buffer();

  match svcinit(&svcname) {
    Ok(InitRes {
      handshake_reply,
      rx_tosvc,
      status_handle
    }) => {
      // If svcinit() returned Ok(), it should have initialized logging.

      // Return Ok() to main server app thread so it will kick off the main
      // server application.
      if tx_fromsvc.send(Ok(handshake_reply)).is_err() {
        log::error!("Unable to send handshake message");
        return;
      }

      // Enter a loop that waits to receive a service termination event.
      if let Err(e) = svcloop(rx_tosvc, status_handle) {
        log::error!("The service loop failed; {}", e);
      }
    }
    Err(e) => {
      // If svcinit() returns Err() we don't actually know if logging has been
      // enabled yet -- but we can't do much other than hope that it is and try
      // to output an error log.
      // ToDo: If dbgtools-win is used, then we should output to the debugger.
      if tx_fromsvc.send(Err(e)).is_err() {
        log::error!("Unable to send handshake message");
      }
    }
  }
}


fn svcinit(svcname: &str) -> Result<InitRes, Error> {
  // Set up logging *before* sending SvcRunning to the caller
  // ToDo: Respect request not to initialize logging
  LumberJack::from_winsvc(svcname)?.init()?;


  // If the service has a WorkDir configured under its Parameters subkey, then
  // retrieve it and attempt to change directory to it.
  // This must be done _before_ sending the HandshakeMsg back to the service
  // main thread.
  // ToDo: Need proper error handling:
  //       - If the Parameters subkey cannot be loaded, do we abort?
  //       - If the cwd cannot be changed to the WorkDir, we should abort.
  if let Ok(svcparams) = get_service_params_subkey(svcname) {
    if let Ok(wd) = svcparams.get_value::<String, &str>("WorkDir") {
      std::env::set_current_dir(wd).map_err(|e| {
        Error::internal(format!("Unable to switch to WorkDir; {}", e))
      })?;
    }
  }

  // Create channel that will be used to receive messages from the application.
  let (tx_tosvc, rx_tosvc) = unbounded_channel();

  // Create channel that will be used to send messages to the application.
  let (tx_svcevt, rx_svcevt) = broadcast::channel(16);

  //
  // Define system service event handler that will be receiving service events.
  //
  let event_handler = move |control_event| -> ServiceControlHandlerResult {
    match control_event {
      ServiceControl::Interrogate => {
        log::debug!("svc signal received: interrogate");
        // Notifies a service to report its current status information to the
        // service control manager.  Always return NoError even if not
        // implemented.
        ServiceControlHandlerResult::NoError
      }
      ServiceControl::Stop => {
        log::debug!("svc signal received: stop");

        // Message the application that it's time to shut down
        if let Err(e) = tx_svcevt.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::Shutdown from winsvc; {}", e);
        }

        ServiceControlHandlerResult::NoError
      }
      ServiceControl::Continue => {
        log::debug!("svc signal received: continue");
        ServiceControlHandlerResult::NotImplemented
      }
      ServiceControl::Pause => {
        log::debug!("svc signal received: pause");
        ServiceControlHandlerResult::NotImplemented
      }
      _ => {
        log::debug!("svc signal received: other");
        ServiceControlHandlerResult::NotImplemented
      }
    }
  };


  let status_handle =
    service_control_handler::register(svcname, event_handler)?;

  if let Err(e) = status_handle.set_service_status(ServiceStatus {
    service_type: SERVICE_TYPE,
    current_state: ServiceState::StartPending,
    controls_accepted: ServiceControlAccept::empty(),
    exit_code: ServiceExitCode::Win32(0),
    checkpoint: 0,
    wait_hint: SERVICE_STARTPENDING_TIME,
    process_id: None
  }) {
    log::error!(
      "Unable to set the service status to 'start pending 0'; {}",
      e
    );
    Err(e)?;
  }

  Ok(InitRes {
    handshake_reply: HandshakeMsg {
      tx: tx_tosvc,
      rx: rx_svcevt
    },
    rx_tosvc,
    status_handle
  })
}

fn svcloop(
  mut rx_tosvc: UnboundedReceiver<ToSvcMsg>,
  status_handle: ServiceStatusHandle
) -> Result<(), Error> {
  //
  // Enter loop that waits for application state changes that should be
  // reported to the service subsystem.
  // Once the application reports that it has stopped, then break out of the
  // loop.
  //
  tracing::trace!("enter app state monitoring loop");
  loop {
    match rx_tosvc.blocking_recv() {
      Some(ev) => {
        match ev {
          ToSvcMsg::Starting(checkpoint) => {
            log::debug!("app reported that it is running");
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::StartPending,
              controls_accepted: ServiceControlAccept::empty(),
              exit_code: ServiceExitCode::Win32(0),
              checkpoint,
              wait_hint: SERVICE_STARTPENDING_TIME,
              process_id: None
            }) {
              log::error!(
                "Unable to set service status to 'start pending {}'; {}",
                checkpoint,
                e
              );
            }
          }
          ToSvcMsg::Started => {
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::Running,
              controls_accepted: ServiceControlAccept::STOP,
              exit_code: ServiceExitCode::Win32(0),
              checkpoint: 0,
              wait_hint: Duration::default(),
              process_id: None
            }) {
              log::error!("Unable to set service status to 'started'; {}", e);
            }
          }
          ToSvcMsg::Stopping(checkpoint) => {
            log::debug!("app is shutting down");
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::StopPending,
              controls_accepted: ServiceControlAccept::empty(),
              exit_code: ServiceExitCode::Win32(0),
              checkpoint,
              wait_hint: SERVICE_STOPPENDING_TIME,
              process_id: None
            }) {
              log::error!(
                "Unable to set service status to 'stop pending {}'; {}",
                checkpoint,
                e
              );
            }
          }
          ToSvcMsg::Stopped => {
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::Stopped,
              controls_accepted: ServiceControlAccept::empty(),
              exit_code: ServiceExitCode::Win32(0),
              checkpoint: 0,
              wait_hint: Duration::default(),
              process_id: None
            }) {
              log::error!("Unable to set service status to 'stopped'; {}", e);
            }

            // Break out of loop to terminate service subsystem
            break;
          }
        }
      }
      None => {
        // All the sender halves have been deallocated
        log::error!("Sender endpoints unexpectedly disappeared");
        break;
      }
    }
  }

  tracing::trace!("service terminated");

  Ok(())
}


const SVCPATH: &str = "SYSTEM\\CurrentControlSet\\Services";
const PARAMS: &str = "Parameters";


pub fn read_service_subkey(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let subkey = services.open_subkey(service_name)?;
  Ok(subkey)
}

pub fn write_service_subkey(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let subkey =
    services.open_subkey_with_flags(service_name, winreg::enums::KEY_WRITE)?;
  Ok(subkey)
}

/// Create a Parameters subkey for a service.
pub fn create_service_params(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let asrv = services.open_subkey(service_name)?;
  let (subkey, _disp) = asrv.create_subkey(PARAMS)?;

  Ok(subkey)
}

/// Create a Parameters subkey for a service.
pub fn get_service_params_subkey(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let asrv = services.open_subkey(service_name)?;
  let subkey = asrv.open_subkey(PARAMS)?;

  Ok(subkey)
}

/// Load a service Parameter from the registry.
pub fn get_service_param(service_name: &str) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let asrv = services.open_subkey(service_name)?;
  let params = asrv.open_subkey(PARAMS)?;

  Ok(params)
}
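
// Illustrative sketch (not part of this module): an installer could use the
// registry helpers above to set the `WorkDir` parameter that svcinit() reads
// at startup.  `set_value()` comes from the `winreg` crate; the function name
// below is hypothetical.
//
//   fn set_service_workdir(service_name: &str, dir: &str) -> Result<(), Error> {
//     let params = create_service_params(service_name)?;
//     params.set_value("WorkDir", &dir)?;
//     Ok(())
//   }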

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/rttype.rs.

//! Runtime types.
//!
//! This module is used to collect the various supported "runtime types" or
//! "run contexts".

#[cfg(feature = "rocket")]
#[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
mod rocket;

mod sync;
mod tokio;

#[cfg(feature = "rocket")]
#[cfg_attr(docsrs, doc(cfg(feature = "rocket")))]
pub(crate) use rocket::rocket_main;

pub(crate) use sync::sync_main;
pub(crate) use tokio::tokio_main;

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/rttype/rocket.rs.

//! Rocket runtime module.
//!
//! While Rocket uses tokio, Rocket insists on initializing tokio itself.
//! Attempting to use the `TokioServiceHandler` instead will cause the
//! `Rocket` instances to issue a warning at startup.
//!
//! As a convenience _qsu_ can keep track of rockets and automatically shut
//! them down once the service subsystem requests a shutdown.  To use this
//! feature, the server application should return a `Vec<Rocket<Build>>` from
//! `RocketServiceHandler::init()`.  Any `Rocket` instance in this vec will be
//! ignited before being passed to `RocketServiceHandler::run()`.
//!
//! Server applications do not need to use this feature; in that case they
//! should return an empty vector from `init()`, and the application code must
//! trigger the shutdown of each `Rocket` instance itself.
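//!
//! A rough, illustrative sketch of an `init()` implementation that hands its
//! rockets over to qsu (the exact trait signature and error type shown here
//! are assumptions, not taken from the crate docs):
//!
//!   async fn init(&mut self, _ss: StartState) -> Result<Vec<Rocket<Build>>, AppErr> {
//!     // Return the un-ignited rocket; qsu ignites it, passes it to run(),
//!     // and asks it to shut down when the service stops.
//!     Ok(vec![rocket::build()])
//!   }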

use std::sync::Arc;

use tokio::{sync::broadcast, task};

use killswitch::KillSwitch;

use crate::{
  err::Error, signals, RocketServiceHandler, StartState, StateReporter,
  StopState, SvcEvt, SvcEvtReader
};


pub(crate) fn rocket_main(
  handler: Box<dyn RocketServiceHandler + Send>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  rocket::execute(rocket_async_main(handler, sr, rx_svcevt))?;

  Ok(())
}

async fn rocket_async_main(
  mut handler: Box<dyn RocketServiceHandler + Send>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let ks = KillSwitch::new();

  // If a SvcEvt receiver end-point was handed to us, then use it.  Otherwise
  // create our own and spawn the monitoring tasks that will generate events
  // for it.
  let rx_svcevt = if let Some(rx_svcevt) = rx_svcevt {
    rx_svcevt
  } else {
    // Create channel used to signal events to application
    let (tx, rx) = broadcast::channel(16);

    let ks2 = ks.clone();

    // SIGINT (on unix) and Ctrl+C on Windows should trigger a Shutdown event.
    let txc = tx.clone();
    task::spawn(signals::wait_shutdown(
      move || {
        if let Err(e) = txc.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
        }
      },
      ks2
    ));

    // SIGTERM (on unix) and Ctrl+Break/Close on Windows should trigger a
    // Terminate event.
    let txc = tx.clone();
    let ks2 = ks.clone();
    task::spawn(signals::wait_term(
      move || {
        if let Err(e) = txc.send(SvcEvt::Terminate) {
          log::error!("Unable to send SvcEvt::Terminate event; {}", e);
        }
      },
      ks2
    ));

    // There doesn't seem to be anything equivalent to SIGHUP for Windows
    // (Services)
    #[cfg(unix)]
    {
      let ks2 = ks.clone();

      let txc = tx.clone();
      task::spawn(signals::wait_reload(
        move || {
          if let Err(e) = txc.send(SvcEvt::ReloadConf) {
            log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
          }
        },
        ks2
      ));
    }

    rx
  };

  let mut rx_svcevt2 = rx_svcevt.resubscribe();

  let set = Box::new(SvcEvtReader { rx: rx_svcevt });

  // Call application's init() method.
  let ss = StartState {
    sr: Arc::clone(&sr)
  };
  let rockets = handler.init(ss).await?;

  // Set the service's state to "started"
  sr.started();

  // Ignite rockets so we can get a Shutdown handle for each of the instances
  // (so we can tell them to shut down gracefully later).
  let mut ignited = vec![];
  let mut rocket_shutdowns = vec![];
  for rocket in rockets {
    let rocket = rocket.ignite().await?;
    rocket_shutdowns.push(rocket.shutdown());
    ignited.push(rocket);
  }

  // Launch a task that waits for the SvcEvt::Shutdown event.  Once it
  // arrives, tell all rocket instances to shut down gracefully.
  //
  // Note: We don't want to use the killswitch for this, because the
  // killswitch isn't triggered until run() has returned, and the shutdown
  // event should be able to trigger the graceful rocket shutdowns before
  // then.
  let jh_graceful_landing = task::spawn(async move {
    loop {
      match rx_svcevt2.recv().await {
        Ok(SvcEvt::Shutdown) => {
          tracing::trace!("Ask rocket instances to shut down gracefully");
          for shutdown in rocket_shutdowns {
            // Tell this rocket instance to shut down gracefully.
            shutdown.notify();
          }
          break;
        }
        Ok(SvcEvt::Terminate) => {
          tracing::trace!("Ask rocket instances to shut down gracefully");
          for shutdown in rocket_shutdowns {
            // Tell this rocket instance to shut down gracefully.
            shutdown.notify();
          }
          break;
        }
        Ok(_) => {
          tracing::trace!("Ignored message in task waiting for shutdown");
          continue;
        }
        Err(e) => {
          log::error!("Unable to receive broadcast SvcEvt message, {}", e);
          break;
        }
      }
    }
  });

  // Call the application's main application function.
  if let Err(e) = handler.run(ignited, *set).await {
    log::error!("Service application returned error; {}", e);
  }

  // Now that the main application has terminated kill off any remaining
  // auxiliary tasks (read: signal waiters)
  ks.trigger();

  // ...and wait for the graceful-landing task to finish.
  if let Err(e) = jh_graceful_landing.await {
    log::warn!(
      "An error was returned from the graceful landing task; {}",
      e
    );
  }

  if ks.finalize().await.is_err() {
    log::warn!("Attempted to finalize KillSwitch that wasn't triggered yet");
  }

  // Call the application's shutdown() function.
  let ss = StopState {
    sr: Arc::clone(&sr)
  };
  if let Err(e) = handler.shutdown(ss).await {
    log::error!("Service shutdown handler returned error; {}", e);
  }

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/rttype/sync.rs.

use std::sync::Arc;

#[cfg(unix)]
use std::thread;

use tokio::sync::broadcast;

#[cfg(unix)]
use nix::sys::signal::{SigSet, SigmaskHow, Signal};

use crate::{
  err::Error, ServiceHandler, StartState, StateReporter, StopState, SvcEvt,
  SvcEvtReader
};

// ToDo: Set up signal handling so we can catch SIGINT, SIGTERM and SIGHUP in
// sync/blocking land as well.
pub(crate) fn sync_main(
  mut handler: Box<dyn ServiceHandler>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let rx_svcevt = if let Some(rx_svcevt) = rx_svcevt {
    rx_svcevt
  } else {
    let (tx, rx) = broadcast::channel(16);

    #[cfg(unix)]
    init_signals(tx)?;

    // On Windows, if rx_svcevt is None, it means we're not running under the
    // service subsystem (i.e. we're running as a foreground process), so
    // register a Ctrl+C handler.
    #[cfg(windows)]
    crate::signals::sync_kill_to_event(tx)?;

    rx
  };

  let set = Box::new(SvcEvtReader { rx: rx_svcevt });

  // Call application's init() method.
  let ss = StartState {
    sr: Arc::clone(&sr)
  };
  handler.init(ss)?;

  // Set the service's state to "started"
  sr.started();

  // Call the application's main application function.
  if let Err(e) = handler.run(*set) {
    log::error!("Service application returned error; {}", e);
  }

  // Call the application's shutdown() function.
  let ss = StopState {
    sr: Arc::clone(&sr)
  };
  if let Err(e) = handler.shutdown(ss) {
    log::error!("Service shutdown handler returned error; {}", e);
  }

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  Ok(())
}


#[cfg(unix)]
/// Set up signal management.
///
/// Block SIGINT, SIGTERM and SIGHUP then launch a thread to catch them and
/// turn them into messages instead.
///
/// This function must be called on the main thread.
fn init_signals(
  tx_svcevt: broadcast::Sender<SvcEvt>
) -> Result<thread::JoinHandle<()>, Error> {
  //
  // Block signals-of-interest on main thread.
  //
  let mut ss = SigSet::empty();
  ss.add(Signal::SIGINT);
  ss.add(Signal::SIGTERM);
  ss.add(Signal::SIGHUP);

  let mut oldset = SigSet::empty();
  nix::sys::signal::pthread_sigmask(
    SigmaskHow::SIG_SETMASK,
    Some(&ss),
    Some(&mut oldset)
  )
  .unwrap();

  let jh = thread::Builder::new()
    .name("sigmon".into())
    .spawn(move || {
      // Note: Don't need to unblock signals in this thread, because sigwait()
      // does it implicitly.
      let mask = unsafe {
        let mut mask: libc::sigset_t = std::mem::zeroed();
        libc::sigemptyset(&mut mask);
        libc::sigaddset(&mut mask, libc::SIGINT);
        libc::sigaddset(&mut mask, libc::SIGTERM);
        libc::sigaddset(&mut mask, libc::SIGHUP);
        mask
      };

      loop {
        let mut sig: libc::c_int = 0;
        let ret = unsafe { libc::sigwait(&mask, &mut sig) };
        if ret == 0 {
          let signal = Signal::try_from(sig).unwrap();
          match signal {
            Signal::SIGINT => {
              if let Err(e) = tx_svcevt.send(SvcEvt::Shutdown) {
                log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
              }
              break;
            }
            Signal::SIGTERM => {
              if let Err(e) = tx_svcevt.send(SvcEvt::Terminate) {
                log::error!("Unable to send SvcEvt::Terminate event; {}", e);
              }
              break;
            }
            Signal::SIGHUP => {
              if let Err(e) = tx_svcevt.send(SvcEvt::ReloadConf) {
                log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
              }
            }
            _ => {}
          }
        }
      }
    })?;

  Ok(jh)
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/rttype/tokio.rs.

use std::sync::Arc;

use tokio::{runtime, sync::broadcast, task};

use crate::{
  err::Error, signals, StartState, StateReporter, StopState, SvcEvt,
  SvcEvtReader, TokioServiceHandler
};

use killswitch::KillSwitch;

pub(crate) fn tokio_main(
  rtbldr: Option<runtime::Builder>,
  handler: Box<dyn TokioServiceHandler>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let rt = if let Some(mut bldr) = rtbldr {
    bldr.build()?
  } else {
    tokio::runtime::Runtime::new()?
  };
  rt.block_on(tokio_async_main(handler, sr, rx_svcevt))?;

  Ok(())
}
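
// Illustrative sketch (not part of this file) of the kind of pre-configured
// builder a caller could supply through `rtbldr`; the builder methods used
// are standard tokio API:
//
//   let mut bldr = tokio::runtime::Builder::new_multi_thread();
//   bldr.worker_threads(2).enable_all();
//   // ...pass Some(bldr) along with the handler so tokio_main() builds the
//   // runtime from it instead of falling back to Runtime::new().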

/// The `async` main function for tokio servers.
///
/// If `rx_svcevt` is `Some(_)` it means the channel was created elsewhere
/// (implied: The transmitting endpoint lives somewhere else).  If it is `None`
/// the channel needs to be created.
async fn tokio_async_main(
  mut handler: Box<dyn TokioServiceHandler>,
  sr: Arc<dyn StateReporter + Send + Sync>,
  rx_svcevt: Option<broadcast::Receiver<SvcEvt>>
) -> Result<(), Error> {
  let ks = KillSwitch::new();

  // If a SvcEvt receiver end-point was handed to us, then use it.  Otherwise
  // create our own and spawn the monitoring tasks that will generate events
  // for it.
  let rx_svcevt = if let Some(rx_svcevt) = rx_svcevt {
    rx_svcevt
  } else {
    // Create channel used to signal events to application
    let (tx, rx) = broadcast::channel(16);

    let ks2 = ks.clone();

    // SIGINT (on unix) and Ctrl+C on Windows should trigger a Shutdown event.
    let txc = tx.clone();
    task::spawn(signals::wait_shutdown(
      move || {
        if let Err(e) = txc.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::Shutdown event; {}", e);
        }
      },
      ks2
    ));

    // SIGTERM (on unix) and Ctrl+Break/Close on Windows should trigger a
    // Terminate event.
    let txc = tx.clone();
    let ks2 = ks.clone();
    task::spawn(signals::wait_term(
      move || {
        if let Err(e) = txc.send(SvcEvt::Terminate) {
          log::error!("Unable to send SvcEvt::Terminate event; {}", e);
        }
      },
      ks2
    ));

    // There doesn't seem to be anything equivalent to SIGHUP for Windows
    // (Services)
    #[cfg(unix)]
    {
      let ks2 = ks.clone();

      let txc = tx.clone();
      task::spawn(signals::wait_reload(
        move || {
          if let Err(e) = txc.send(SvcEvt::ReloadConf) {
            log::error!("Unable to send SvcEvt::ReloadConf event; {}", e);
          }
        },
        ks2
      ));
    }

    rx
  };

  let set = Box::new(SvcEvtReader { rx: rx_svcevt });

  // Call application's init() method.
  let ss = StartState {
    sr: Arc::clone(&sr)
  };
  handler.init(ss).await?;

  // Set the service's state to "started"
  sr.started();

  // Call the application's main application function.
  if let Err(e) = handler.run(*set).await {
    log::error!("Service application returned error; {}", e);
  }

  // Now that the main application has terminated kill off any remaining
  // auxiliary tasks (read: signal waiters)
  ks.trigger();

  if (ks.finalize().await).is_err() {
    log::warn!("Attempted to finalize KillSwitch that wasn't triggered yet");
  }

  // Call the application's shutdown() function.
  let ss = StopState {
    sr: Arc::clone(&sr)
  };
  if let Err(e) = handler.shutdown(ss).await {
    log::error!("Service shutdown handler returned error; {}", e);
  }

  // Inform the service subsystem that the shutdown is complete
  sr.stopped();

  Ok(())
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/signals.rs.

//! Signal monitoring.

#[cfg(unix)]
mod unix;

#[cfg(windows)]
mod win;

#[cfg(unix)]
pub use unix::{wait_reload, wait_shutdown, wait_term};

#[cfg(windows)]
pub use win::{wait_shutdown, wait_term};

#[cfg(windows)]
pub(crate) use win::sync_kill_to_event;

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/signals/unix.rs.

use tokio::signal::unix::{signal, SignalKind};

use killswitch::KillSwitch;

/// Async task used to wait for SIGINT.
///
/// Whenever a SIGINT is signalled the closure in `f` is called and the task
/// is terminated.
pub async fn wait_shutdown<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("SIGINT task launched");

  let Ok(mut sigint) = signal(SignalKind::interrupt()) else {
    log::error!("Unable to create SIGINT Future");
    return;
  };

  // Wait for SIGINT.
  tokio::select! {
    _ = sigint.recv() => {
      tracing::debug!("Received SIGINT -- running closure");
      f();
    },
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_shutdown() terminating");
}

pub async fn wait_term<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("SIGTERM task launched");

  let Ok(mut sigterm) = signal(SignalKind::terminate()) else {
    log::error!("Unable to create SIGTERM Future");
    return;
  };

  // Wait for either SIGTERM or the killswitch.
  tokio::select! {
    _ = sigterm.recv() => {
      tracing::debug!("Received SIGTERM -- running closure");
      f();
    }
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_term() terminating");
}

/// Async task used to wait for SIGHUP
///
/// Whenever a SIGHUP is signalled the closure in `f` is called.
pub async fn wait_reload<F>(f: F, ks: KillSwitch)
where
  F: Fn()
{
  tracing::trace!("SIGHUP task launched");

  let Ok(mut sighup) = signal(SignalKind::hangup()) else {
    log::error!("Unable to create SIGHUP Future");
    return;
  };
  loop {
    tokio::select! {
      _ = sighup.recv() => {
        tracing::debug!("Received SIGHUP");
        f();
      },
      _ = ks.wait() => {
        tracing::debug!("killswitch triggered");
        break;
      }
    }
  }

  tracing::trace!("wait_reload() terminating");
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/signals/win.rs.

use std::sync::OnceLock;

use tokio::{signal, sync::broadcast};

use windows_sys::Win32::{
  Foundation::{BOOL, FALSE, TRUE},
  System::Console::{
    SetConsoleCtrlHandler, CTRL_BREAK_EVENT, CTRL_CLOSE_EVENT, CTRL_C_EVENT
  }
};

use killswitch::KillSwitch;

use crate::{err::Error, SvcEvt};


static CELL: OnceLock<Box<dyn Fn(u32) -> BOOL + Send + Sync>> =
  OnceLock::new();

/// Async task used to wait for Ctrl+C to be signalled.
///
/// Whenever a Ctrl+C is signalled the closure in `f` is called and
/// the task is terminated.
pub async fn wait_shutdown<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("CTRL+C task launched");

  tokio::select! {
    _ = signal::ctrl_c() => {
      tracing::debug!("Received Ctrl+C");
      // Once any process termination signal has been received, call the
      // callback.
      f();
    },
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_shutdown() terminating");
}

pub async fn wait_term<F>(f: F, ks: KillSwitch)
where
  F: FnOnce()
{
  tracing::trace!("CTRL+Break/Close task launched");

  let Ok(mut cbreak) = signal::windows::ctrl_break() else {
    log::error!("Unable to create Ctrl+Break monitor");
    return;
  };

  let Ok(mut cclose) = signal::windows::ctrl_close() else {
    log::error!("Unable to create Close monitor");
    return;
  };

  tokio::select! {
    _ = cbreak.recv() => {
      tracing::debug!("Received Ctrl+Break");
      // Once any process termination signal has been received, call the
      // callback.
      f();
    },
    _ = cclose.recv() => {
      tracing::debug!("Received Close");
      // Once any process termination signal has been received, call the
      // callback.
      f();
    },
    _ = ks.wait() => {
      tracing::debug!("killswitch triggered");
    }
  }

  tracing::trace!("wait_term() terminating");
}


pub(crate) fn sync_kill_to_event(
  tx: broadcast::Sender<SvcEvt>
) -> Result<(), Error> {
  setup_sync_fg_kill_handler(move |ty| {
    match ty {
      CTRL_C_EVENT => {
        tracing::trace!(
          "Received some kind of event that should trigger a shutdown."
        );
        if tx.send(SvcEvt::Shutdown).is_ok() {
          // We handled this event
          TRUE
        } else {
          FALSE
        }
      }
      CTRL_BREAK_EVENT | CTRL_CLOSE_EVENT => {
        tracing::trace!(
          "Received some kind of event that should trigger a termination."
        );
        if tx.send(SvcEvt::Terminate).is_ok() {
          // We handled this event
          TRUE
        } else {
          FALSE
        }
      }
      _ => FALSE
    }
  })?;
  Ok(())
}


pub(crate) fn setup_sync_fg_kill_handler<F>(f: F) -> Result<(), Error>
where
  F: Fn(u32) -> BOOL + Send + Sync + 'static
{
  CELL
    .set(Box::new(f))
    .map_err(|_| Error::internal("Unable to set shared OnceLock cell"))?;

  let rc = unsafe { SetConsoleCtrlHandler(Some(ctrlhandler), 1) };
  (rc != 0)
    .then_some(())
    .ok_or(Error::internal("SetConsoleCtrlHandler failed"))?;

  Ok(())
}

unsafe extern "system" fn ctrlhandler(ty: u32) -> BOOL {
  let Some(f) = CELL.get() else {
    return FALSE;
  };

  f(ty)
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/systemd.rs.

//! systemd service module.
//!
//! Implements systemd-specific service subsystem interactions.
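//!
//! Note that the notifications sent below only take effect when the unit is
//! declared with `Type=notify`; a minimal unit fragment (paths are
//! placeholders):
//!
//!   [Service]
//!   Type=notify
//!   ExecStart=/path/to/your-service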

use sd_notify::NotifyState;

pub struct ServiceReporter {}

impl super::StateReporter for ServiceReporter {
  fn starting(&self, checkpoint: u32) {
    let text = format!("Startup checkpoint {}", checkpoint);
    if let Err(e) = sd_notify::notify(false, &[NotifyState::Status(&text)]) {
      log::error!("Unable to report service starting state; {}", e);
    }
  }

  fn started(&self) {
    if let Err(e) = sd_notify::notify(false, &[NotifyState::Ready]) {
      log::error!("Unable to report service started state; {}", e);
    }
  }

  fn stopping(&self, checkpoint: u32) {
    if checkpoint == 0 {
      if let Err(e) = sd_notify::notify(false, &[NotifyState::Stopping]) {
        log::error!("Unable to report service stopping state; {}", e);
      }
    }
  }

  fn stopped(&self) {}
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Deleted src/winsvc.rs.

//! Windows service module.

use std::{
  ffi::OsString,
  sync::{Arc, OnceLock},
  thread,
  time::Duration
};

use parking_lot::Mutex;

use tokio::sync::{
  broadcast,
  mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},
  oneshot
};

use windows_service::{
  define_windows_service,
  service::{
    ServiceControl, ServiceControlAccept, ServiceExitCode, ServiceState,
    ServiceStatus, ServiceType
  },
  service_control_handler::{
    self, ServiceControlHandlerResult, ServiceStatusHandle
  },
  service_dispatcher
};

use winreg::{enums::*, RegKey};

#[cfg(feature = "wait-for-debugger")]
use dbgtools_win::debugger;


use crate::{err::Error, lumberjack::LumberJack, SvcEvt, SvcType};

const SERVICE_TYPE: ServiceType = ServiceType::OWN_PROCESS;
//const SERVICE_STARTPENDING_TIME: Duration = Duration::from_secs(10);
const SERVICE_STARTPENDING_TIME: Duration = Duration::from_secs(300);
const SERVICE_STOPPENDING_TIME: Duration = Duration::from_secs(30);


/// Messages that are sent to the service subsystem thread from the
/// application.
enum ToSvcMsg {
  Starting(u32),
  Started,
  Stopping(u32),
  Stopped
}

/// Buffer passed from main thread to service subsystem thread via global
/// `OnceLock`.
pub(crate) struct Xfer {
  svcname: String,

  /// Used to send the handshake message from the service handler.
  tx_fromsvc: oneshot::Sender<Result<HandshakeMsg, Error>>
}

/// Used as a "bridge" to send information to the service thread.
static CELL: OnceLock<Mutex<Option<Xfer>>> = OnceLock::new();


/// Buffer passed back to the application thread from the service subsystem
/// thread.
struct HandshakeMsg {
  /// Channel end-point used to send messages to the service subsystem.
  tx: UnboundedSender<ToSvcMsg>,

  /// Channel end-point used to receive messages from the service subsystem.
  rx: broadcast::Receiver<SvcEvt>
}


pub struct ServiceReporter {
  tx: UnboundedSender<ToSvcMsg>
}

impl super::StateReporter for ServiceReporter {
  fn starting(&self, checkpoint: u32) {
    if let Err(e) = self.tx.send(ToSvcMsg::Starting(checkpoint)) {
      log::error!("Unable to send Starting message; {}", e);
    }
  }

  fn started(&self) {
    if let Err(e) = self.tx.send(ToSvcMsg::Started) {
      log::error!("Unable to send Started message; {}", e);
    }
  }

  fn stopping(&self, checkpoint: u32) {
    if let Err(e) = self.tx.send(ToSvcMsg::Stopping(checkpoint)) {
      log::error!("Unable to send Stopping message; {}", e);
    }
  }

  fn stopped(&self) {
    if let Err(e) = self.tx.send(ToSvcMsg::Stopped) {
      log::error!("Unable to send Stopped message; {}", e);
    }
  }
}


pub fn run(svcname: &str, st: SvcType) -> Result<(), Error> {
  #[cfg(feature = "wait-for-debugger")]
  {
    debugger::wait_for_then_break();
    debugger::output("Hello, debugger");
  }

  // Create a one-shot channel used to receive an initial handshake from the
  // service handler thread.
  let (tx_fromsvc, rx_fromsvc) = oneshot::channel();

  // Create a buffer that will be used to transfer data to the service
  // subsystem's callback function.
  let xfer = Xfer {
    svcname: svcname.into(),
    tx_fromsvc
  };

  // Store Xfer buffer in the shared state (so the service handler thread can
  // take it out).
  // This must be done _before_ launching the application runtime thread below.
  CELL.get_or_init(|| Mutex::new(Some(xfer)));

  // Launch main application thread.
  //
  // The server application must be run on its own thread because the service
  // dispatcher call below will block the thread.
  let jh = thread::Builder::new()
    .name("svcapp".into())
    .spawn(move || srvapp_thread(st, rx_fromsvc))?;

  // Register generated `ffi_service_main` with the system and start the
  // service, blocking this thread until the service is stopped.
  service_dispatcher::start(svcname, ffi_service_main)?;

  match jh.join() {
    Ok(_) => Ok(()),
    Err(e) => *e
      .downcast::<Result<(), Error>>()
      .expect("Unable to downcast error from svcapp thread")
  }
}

fn srvapp_thread(
  st: SvcType,
  rx_fromsvc: oneshot::Receiver<Result<HandshakeMsg, Error>>
) -> Result<(), Error> {
  // Wait for the service subsystem to report that it has initialized.
  // It passes along a channel end-point that can be used to send events to
  // the service manager.

  let Ok(res) = rx_fromsvc.blocking_recv() else {
    panic!("Unable to receive handshake");
  };

  let Ok(HandshakeMsg { tx, rx }) = res else {
    panic!("Unable to receive handshake");
  };

  let reporter = Arc::new(ServiceReporter { tx: tx.clone() });

  match st {
    SvcType::Sync(handler) => {
      crate::rttype::sync_main(handler, reporter, Some(rx))
    }
    SvcType::Tokio(rtbldr, handler) => {
      crate::rttype::tokio_main(rtbldr, handler, reporter, Some(rx))
    }
    #[cfg(feature = "rocket")]
    SvcType::Rocket(handler) => {
      crate::rttype::rocket_main(handler, reporter, Some(rx))
    }
  }
}


// Generate the windows service boilerplate.  The boilerplate contains the
// low-level service entry function (ffi_service_main) that parses incoming
// service arguments into Vec<OsString> and passes them to user defined service
// entry (my_service_main).
define_windows_service!(ffi_service_main, my_service_main);

fn take_shared_buffer() -> Xfer {
  let Some(x) = CELL.get() else {
    panic!("Unable to get shared buffer");
  };
  x.lock().take().unwrap()
}

/// The `Ok()` return value from [`svcinit()`].
struct InitRes {
  /// Value returned to the server application thread.
  handshake_reply: HandshakeMsg,

  rx_tosvc: UnboundedReceiver<ToSvcMsg>,

  status_handle: ServiceStatusHandle
}

fn my_service_main(_arguments: Vec<OsString>) {
  // Start by pulling out the service name and the channel sender.
  let Xfer {
    svcname,
    tx_fromsvc
  } = take_shared_buffer();

  match svcinit(&svcname) {
    Ok(InitRes {
      handshake_reply,
      rx_tosvc,
      status_handle
    }) => {
      // If svcinit() returned Ok(), it should have initialized logging.

      // Return Ok() to main server app thread so it will kick off the main
      // server application.
      if tx_fromsvc.send(Ok(handshake_reply)).is_err() {
        log::error!("Unable to send handshake message");
        return;
      }

      // Enter a loop that waits to receive a service termination event.
      if let Err(e) = svcloop(rx_tosvc, status_handle) {
        log::error!("The service loop failed; {}", e);
      }
    }
    Err(e) => {
      // If svcinit() returns Err() we don't actually know if logging has been
      // enabled yet -- but we can't do much other than hope that it is and try
      // to output an error log.
      // ToDo: If dbgtools-win is used, then we should output to the debugger.
      if tx_fromsvc.send(Err(e)).is_err() {
        log::error!("Unable to send handshake message");
      }
    }
  }
}


fn svcinit(svcname: &str) -> Result<InitRes, Error> {
  // Set up logging *before* sending SvcRunning to the caller
  LumberJack::from_winsvc(svcname)?.init()?;


  // If the service has a WorkDir configured under its Parameters subkey, then
  // retrieve it and attempt to change directory to it.
  // This must be done _before_ sending the HandshakeMsg back to the service
  // main thread.
  // ToDo: Need proper error handling:
  //       - If the Parameters subkey cannot be loaded, do we abort?
  //       - If the cwd cannot be changed to the WorkDir, we should abort.
  if let Ok(svcparams) = get_service_params_subkey(svcname) {
    if let Ok(wd) = svcparams.get_value::<String, &str>("WorkDir") {
      std::env::set_current_dir(wd).map_err(|e| {
        Error::internal(format!("Unable to switch to WorkDir; {}", e))
      })?;
    }
  }

  // Create channel that will be used to receive messages from the application.
  let (tx_tosvc, rx_tosvc) = unbounded_channel();

  // Create channel that will be used to send messages to the application.
  let (tx_svcevt, rx_svcevt) = broadcast::channel(16);

  //
  // Define system service event handler that will be receiving service events.
  //
  let event_handler = move |control_event| -> ServiceControlHandlerResult {
    match control_event {
      ServiceControl::Interrogate => {
        log::debug!("svc signal received: interrogate");
        // Notifies a service to report its current status information to the
        // service control manager.  Always return NoError even if not
        // implemented.
        ServiceControlHandlerResult::NoError
      }
      ServiceControl::Stop => {
        log::debug!("svc signal received: stop");

        // Message the application that it's time to shut down
        if let Err(e) = tx_svcevt.send(SvcEvt::Shutdown) {
          log::error!("Unable to send SvcEvt::Shutdown from winsvc; {}", e);
        }

        ServiceControlHandlerResult::NoError
      }
      ServiceControl::Continue => {
        log::debug!("svc signal received: continue");
        ServiceControlHandlerResult::NotImplemented
      }
      ServiceControl::Pause => {
        log::debug!("svc signal received: pause");
        ServiceControlHandlerResult::NotImplemented
      }
      _ => {
        log::debug!("svc signal received: other");
        ServiceControlHandlerResult::NotImplemented
      }
    }
  };


  let status_handle =
    service_control_handler::register(svcname, event_handler)?;

  if let Err(e) = status_handle.set_service_status(ServiceStatus {
    service_type: SERVICE_TYPE,
    current_state: ServiceState::StartPending,
    controls_accepted: ServiceControlAccept::empty(),
    exit_code: ServiceExitCode::Win32(0),
    checkpoint: 0,
    wait_hint: SERVICE_STARTPENDING_TIME,
    process_id: None
  }) {
    log::error!(
      "Unable to set the service status to 'start pending 0'; {}",
      e
    );
    Err(e)?;
  }

  Ok(InitRes {
    handshake_reply: HandshakeMsg {
      tx: tx_tosvc,
      rx: rx_svcevt
    },
    rx_tosvc,
    status_handle
  })
}

fn svcloop(
  mut rx_tosvc: UnboundedReceiver<ToSvcMsg>,
  status_handle: ServiceStatusHandle
) -> Result<(), Error> {
  //
  // Enter loop that waits for application state changes that should be
  // reported to the service subsystem.
  // Once the application reports that it has stopped, then break out of the
  // loop.
  //
  tracing::trace!("enter app state monitoring loop");
  loop {
    match rx_tosvc.blocking_recv() {
      Some(ev) => {
        match ev {
          ToSvcMsg::Starting(checkpoint) => {
            log::debug!("app reported that it is running");
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::StartPending,
              controls_accepted: ServiceControlAccept::empty(),
              exit_code: ServiceExitCode::Win32(0),
              checkpoint,
              wait_hint: SERVICE_STARTPENDING_TIME,
              process_id: None
            }) {
              log::error!(
                "Unable to set service status to 'start pending {}'; {}",
                checkpoint,
                e
              );
            }
          }
          ToSvcMsg::Started => {
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::Running,
              controls_accepted: ServiceControlAccept::STOP,
              exit_code: ServiceExitCode::Win32(0),
              checkpoint: 0,
              wait_hint: Duration::default(),
              process_id: None
            }) {
              log::error!("Unable to set service status to 'started'; {}", e);
            }
          }
          ToSvcMsg::Stopping(checkpoint) => {
            log::debug!("app is shutting down");
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::StopPending,
              controls_accepted: ServiceControlAccept::empty(),
              exit_code: ServiceExitCode::Win32(0),
              checkpoint,
              wait_hint: SERVICE_STOPPENDING_TIME,
              process_id: None
            }) {
              log::error!(
                "Unable to set service status to 'stop pending {}'; {}",
                checkpoint,
                e
              );
            }
          }
          ToSvcMsg::Stopped => {
            if let Err(e) = status_handle.set_service_status(ServiceStatus {
              service_type: SERVICE_TYPE,
              current_state: ServiceState::Stopped,
              controls_accepted: ServiceControlAccept::empty(),
              exit_code: ServiceExitCode::Win32(0),
              checkpoint: 0,
              wait_hint: Duration::default(),
              process_id: None
            }) {
              log::error!("Unable to set service status to 'stopped'; {}", e);
            }

            // Break out of loop to terminate service subsystem
            break;
          }
        }
      }
      None => {
        // All the sender halves have been deallocated
        log::error!("Sender endpoints unexpectedly disappeared");
        break;
      }
    }
  }

  tracing::trace!("service terminated");

  Ok(())
}


const SVCPATH: &str = "SYSTEM\\CurrentControlSet\\Services";
const PARAMS: &str = "Parameters";


pub fn read_service_subkey(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let subkey = services.open_subkey(service_name)?;
  Ok(subkey)
}

pub fn write_service_subkey(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let subkey =
    services.open_subkey_with_flags(service_name, winreg::enums::KEY_WRITE)?;
  Ok(subkey)
}

/// Create a Parameters subkey for a service.
pub fn create_service_params(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let asrv = services.open_subkey(service_name)?;
  let (subkey, _disp) = asrv.create_subkey(PARAMS)?;

  Ok(subkey)
}

/// Create a Parameters subkey for a service.
pub fn get_service_params_subkey(
  service_name: &str
) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let asrv = services.open_subkey(service_name)?;
  let subkey = asrv.open_subkey(PARAMS)?;

  Ok(subkey)
}

/// Open a service's `Parameters` subkey, from which individual parameter
/// values can be read.
pub fn get_service_param(service_name: &str) -> Result<winreg::RegKey, Error> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let services = hklm.open_subkey(SVCPATH)?;
  let asrv = services.open_subkey(service_name)?;
  let params = asrv.open_subkey(PARAMS)?;

  Ok(params)
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :
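
As a point of reference, here is a minimal sketch of how a server application
might read one of its `Parameters` values given the registry layout used
above.  The service name `myservice` and the value name `LogLevel` are made up
for illustration; only `winreg` calls that the helpers above already rely on
are used:

```rust
use winreg::RegKey;
use winreg::enums::HKEY_LOCAL_MACHINE;

/// Hypothetical example: read a REG_SZ parameter for a service named
/// "myservice".  The path mirrors SVCPATH/PARAMS above.
fn read_log_level() -> std::io::Result<String> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let params = hklm.open_subkey(
    "SYSTEM\\CurrentControlSet\\Services\\myservice\\Parameters"
  )?;

  // "LogLevel" is a made-up value name; any string value is read the same way.
  params.get_value("LogLevel")
}
```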

Added tests/apperr.rs.

mod apps;
mod err;

use std::sync::Arc;

use parking_lot::Mutex;

use qsu::rt::RunCtx;

use err::Error;

const SVCNAME: &str = "svctest";

/// Returning an `Err(AppErr)` from `init()` should return it back to the
/// application's initial call to kick off the runtime.
#[test]
fn error_from_sync_init() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MySyncService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_init()
  );

  // Call RunCtx::run_sync(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_sync(handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only init() failed
  assert!(errs.init.is_some());
  assert!(errs.run.is_none());
  assert!(errs.shutdown.is_none());

  let Error::Hello(s) = errs.init.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Sync::init()");

  // Failing init should cause run() not to be called, but shutdown() should
  // still be called.
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(!visited.run);
  assert!(visited.shutdown);
}

/// Returning an `Err(AppErr)` from `run()` should return it back to the
/// application's initial call to kick off the runtime.
#[test]
fn error_from_sync_run() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MySyncService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_run()
  );

  // Call RunCtx::run_sync(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_sync(handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only run() failed
  assert!(errs.init.is_none());
  assert!(errs.run.is_some());
  assert!(errs.shutdown.is_none());

  let Error::Hello(s) = errs.run.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Sync::run()");

  // Failing run should not hinder shutdown() from being called.
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}

/// Returning an `Err(AppErr)` from `shutdown()` should return it back to the
/// application's initial call to kick off the runtime.
#[test]
fn error_from_sync_shutdown() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MySyncService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_shutdown()
  );

  // Call RunCtx::run_sync(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_sync(handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only shutdown() failed
  assert!(errs.init.is_none());
  assert!(errs.run.is_none());
  assert!(errs.shutdown.is_some());

  let Error::Hello(s) = errs.shutdown.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Sync::shutdown()");

  // All callbacks should have been visited
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}


/// Returning an `Err(AppErr)` from `init()` should return it back to the
/// application's initial call to kick off the runtime.
#[cfg(feature = "tokio")]
#[test]
fn error_from_tokio_init() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MyTokioService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_init()
  );

  // Call RunCtx::run_tokio(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_tokio(None, handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only init() failed
  assert!(errs.init.is_some());
  assert!(errs.run.is_none());
  assert!(errs.shutdown.is_none());

  let Error::Hello(s) = errs.init.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Tokio::init()");

  // Failing init should cause run() not to be called, but shutdown() should
  // still be called.
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(!visited.run);
  assert!(visited.shutdown);
}

/// Returning an `Err(AppErr)` from `run()` should return it back to the
/// application's initial call to kick off the runtime.
#[cfg(feature = "tokio")]
#[test]
fn error_from_tokio_run() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MyTokioService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_run()
  );

  // Call RunCtx::run_tokio(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_tokio(None, handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only run() failed
  assert!(errs.init.is_none());
  assert!(errs.run.is_some());
  assert!(errs.shutdown.is_none());

  let Error::Hello(s) = errs.run.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Tokio::run()");

  // Failing run should not hinder shutdown() from being called.
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}


/// Returning an `Err(AppErr)` from `shutdown()` should return it back to the
/// application's initial call to kick off the runtime.
#[cfg(feature = "tokio")]
#[test]
fn error_from_tokio_shutdown() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MyTokioService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_shutdown()
  );

  // Call RunCtx::run_tokio(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_tokio(None, handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only shutdown() failed
  assert!(errs.init.is_none());
  assert!(errs.run.is_none());
  assert!(errs.shutdown.is_some());

  let Error::Hello(s) = errs.shutdown.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Tokio::shutdown()");

  // All callbacks should have been visited
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}


/// Returning an `Err(AppErr)` from `init()` should return it back to the
/// application's initial call to kick off the runtime.
#[cfg(feature = "rocket")]
#[test]
fn error_from_rocket_init() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MyRocketService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_init()
  );

  // Call RunCtx::run_rocket(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_rocket(handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only init() failed
  assert!(errs.init.is_some());
  assert!(errs.run.is_none());
  assert!(errs.shutdown.is_none());

  let Error::Hello(s) = errs.init.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Rocket::init()");

  // Failing init should cause run() not to be called, but shutdown() should
  // still be called.
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(!visited.run);
  assert!(visited.shutdown);
}

/// Returning an `Err(AppErr)` from `run()` should return it back to the
/// application's initial call to kick off the runtime.
#[cfg(feature = "rocket")]
#[test]
fn error_from_rocket_run() {
  let runctx = RunCtx::new(SVCNAME).log_init(false);

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MyRocketService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_run()
  );

  // Call RunCtx::run_rocket(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_rocket(handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only run() failed
  assert!(errs.init.is_none());
  assert!(errs.run.is_some());
  assert!(errs.shutdown.is_none());

  let Error::Hello(s) = errs.run.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Rocket::run()");

  // Failing run should not hinder shutdown() from being called.
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}

/// Returning an `Err(AppErr)` from `shutdown()` should return it back to the
/// application's initial call to kick off the runtime.
#[cfg(feature = "rocket")]
#[test]
fn error_from_rocket_shutdown() {
  let runctx = RunCtx::new(SVCNAME).log_init(false);

  // Prepare a server application context which keeps track of which callbacks
  // have been called
  let visited = Arc::new(Mutex::new(apps::Visited::default()));
  let handler = Box::new(
    apps::MyRocketService {
      visited: Arc::clone(&visited),
      ..Default::default()
    }
    .fail_shutdown()
  );

  // Call RunCtx::run_rocket(), expecting a server application callback error.
  let Err(qsu::Error::SrvApp(errs)) = runctx.run_rocket(handler) else {
    panic!("Expected Err(qsu::Error::SrvApp(_))");
  };

  // Verify that only shutdown() failed
  assert!(errs.init.is_none());
  assert!(errs.run.is_none());
  assert!(errs.shutdown.is_some());

  let Error::Hello(s) = errs.shutdown.unwrap().unwrap_inner::<Error>() else {
    panic!("Not expected Error::Hello");
  };
  assert_eq!(s, "From Rocket::shutdown()");

  // All callbacks should have been visited
  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added tests/apps/mod.rs.

use std::sync::Arc;

use parking_lot::Mutex;

use qsu::rt::{ServiceHandler, StartState, StopState, SvcEvtReader};

#[cfg(feature = "tokio")]
use qsu::rt::TokioServiceHandler;


#[cfg(feature = "rocket")]
use qsu::{
  rocket::{Build, Ignite, Rocket},
  rt::RocketServiceHandler
};

use crate::err::Error;

#[derive(Default)]
pub struct FailMode {
  init: bool,
  run: bool,
  shutdown: bool
}

#[allow(unused)]
impl FailMode {
  pub fn init(&mut self) -> &mut Self {
    self.init = true;
    self
  }
  pub fn run(&mut self) -> &mut Self {
    self.run = true;
    self
  }
  pub fn shutdown(&mut self) -> &mut Self {
    self.shutdown = true;
    self
  }
}

#[derive(Default)]
pub struct Visited {
  pub init: bool,
  pub run: bool,
  pub shutdown: bool
}

#[derive(Default)]
pub struct MySyncService {
  pub fail: FailMode,
  pub visited: Arc<Mutex<Visited>>
}

#[allow(unused)]
impl MySyncService {
  pub fn fail_init(mut self) -> Self {
    self.fail.init();
    self
  }
  pub fn fail_run(mut self) -> Self {
    self.fail.run();
    self
  }
  pub fn fail_shutdown(mut self) -> Self {
    self.fail.shutdown();
    self
  }
}


impl ServiceHandler for MySyncService {
  fn init(&mut self, _ss: StartState) -> Result<(), qsu::AppErr> {
    self.visited.lock().init = true;
    if self.fail.init {
      Err(Error::hello("From Sync::init()"))?;
    }
    Ok(())
  }

  fn run(&mut self, _set: SvcEvtReader) -> Result<(), qsu::AppErr> {
    self.visited.lock().run = true;
    if self.fail.run {
      Err(Error::hello("From Sync::run()"))?;
    }
    Ok(())
  }

  fn shutdown(&mut self, _ss: StopState) -> Result<(), qsu::AppErr> {
    self.visited.lock().shutdown = true;
    if self.fail.shutdown {
      Err(Error::hello("From Sync::shutdown()"))?;
    }
    Ok(())
  }
}


#[cfg(feature = "tokio")]
#[derive(Default)]
pub struct MyTokioService {
  pub fail: FailMode,
  pub visited: Arc<Mutex<Visited>>
}

#[cfg(feature = "tokio")]
#[allow(unused)]
impl MyTokioService {
  pub fn fail_init(mut self) -> Self {
    self.fail.init();
    self
  }
  pub fn fail_run(mut self) -> Self {
    self.fail.run();
    self
  }
  pub fn fail_shutdown(mut self) -> Self {
    self.fail.shutdown();
    self
  }
}

#[cfg(feature = "tokio")]
#[qsu::async_trait]
impl TokioServiceHandler for MyTokioService {
  async fn init(&mut self, _ss: StartState) -> Result<(), qsu::AppErr> {
    self.visited.lock().init = true;
    if self.fail.init {
      Err(Error::hello("From Tokio::init()"))?;
    }
    Ok(())
  }

  async fn run(&mut self, _set: SvcEvtReader) -> Result<(), qsu::AppErr> {
    self.visited.lock().run = true;
    if self.fail.run {
      Err(Error::hello("From Tokio::run()"))?;
    }
    Ok(())
  }

  async fn shutdown(&mut self, _ss: StopState) -> Result<(), qsu::AppErr> {
    self.visited.lock().shutdown = true;
    if self.fail.shutdown {
      Err(Error::hello("From Tokio::shutdown()"))?;
    }
    Ok(())
  }
}


#[cfg(feature = "rocket")]
#[derive(Default)]
pub struct MyRocketService {
  pub fail: FailMode,
  pub visited: Arc<Mutex<Visited>>
}

#[cfg(feature = "rocket")]
#[allow(unused)]
impl MyRocketService {
  pub fn fail_init(mut self) -> Self {
    self.fail.init();
    self
  }
  pub fn fail_run(mut self) -> Self {
    self.fail.run();
    self
  }
  pub fn fail_shutdown(mut self) -> Self {
    self.fail.shutdown();
    self
  }
}

#[cfg(feature = "rocket")]
#[qsu::async_trait]
impl RocketServiceHandler for MyRocketService {
  async fn init(
    &mut self,
    _ss: StartState
  ) -> Result<Vec<Rocket<Build>>, qsu::AppErr> {
    self.visited.lock().init = true;
    if self.fail.init {
      Err(Error::hello("From Rocket::init()"))?;
    }
    Ok(Vec::new())
  }

  async fn run(
    &mut self,
    _rockets: Vec<Rocket<Ignite>>,
    _set: SvcEvtReader
  ) -> Result<(), qsu::AppErr> {
    self.visited.lock().run = true;
    if self.fail.run {
      Err(Error::hello("From Rocket::run()"))?;
    }
    Ok(())
  }

  async fn shutdown(&mut self, _ss: StopState) -> Result<(), qsu::AppErr> {
    self.visited.lock().shutdown = true;
    if self.fail.shutdown {
      Err(Error::hello("From Rocket::shutdown()"))?;
    }
    Ok(())
  }
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added tests/err/mod.rs.

use std::{fmt, io};

#[derive(Debug)]
pub enum Error {
  Hello(String),
  IO(String),
  Qsu(String)
}

impl std::error::Error for Error {}

impl Error {
  pub fn hello(msg: impl ToString) -> Self {
    Error::Hello(msg.to_string())
  }
}

impl fmt::Display for Error {
  fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
    match self {
      Error::Hello(s) => {
        write!(f, "Hello error; {}", s)
      }
      Error::IO(s) => {
        write!(f, "I/O error; {}", s)
      }
      Error::Qsu(s) => {
        write!(f, "qsu error; {}", s)
      }
    }
  }
}

impl From<io::Error> for Error {
  fn from(err: io::Error) -> Self {
    Error::IO(err.to_string())
  }
}

impl From<qsu::Error> for Error {
  fn from(err: qsu::Error) -> Self {
    Error::Qsu(err.to_string())
  }
}

/*
/// Convenience converter used to pass application-defined errors from the
/// qsu inner runtime back out from the qsu runtime.
impl From<Error> for qsu::Error {
  fn from(err: Error) -> qsu::Error {
    qsu::Error::app(err)
  }
}
*/

impl From<Error> for qsu::AppErr {
  fn from(err: Error) -> qsu::AppErr {
    qsu::AppErr::new(err)
  }
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Added tests/initrunshutdown.rs.

mod apps;
mod err;

use std::sync::Arc;

use parking_lot::Mutex;

use qsu::rt::RunCtx;

const SVCNAME: &str = "svctest";

/// Make sure that `init()`, `run()` and `shutdown()` are called for the sync
/// case.
#[test]
fn all_sync_callbacks() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  let visited = apps::Visited::default();

  assert!(!visited.init);
  assert!(!visited.run);
  assert!(!visited.shutdown);

  let visited = Arc::new(Mutex::new(visited));

  let handler = Box::new(apps::MySyncService {
    visited: Arc::clone(&visited),
    ..Default::default()
  });

  let Ok(_) = runctx.run_sync(handler) else {
    panic!("run_sync() unexpectedly failed");
  };

  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}

/// Make sure that `init()`, `run()` and `shutdown()` are called for the tokio
/// case.
#[cfg(feature = "tokio")]
#[test]
fn all_tokio_callbacks() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  let visited = apps::Visited::default();

  assert!(!visited.init);
  assert!(!visited.run);
  assert!(!visited.shutdown);

  let visited = Arc::new(Mutex::new(visited));

  let handler = Box::new(apps::MyTokioService {
    visited: Arc::clone(&visited),
    ..Default::default()
  });

  let Ok(_) = runctx.run_tokio(None, handler) else {
    panic!("run_sync() unexpectedly failed");
  };

  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}


/// Make sure that `init()`, `run()` and `shutdown()` are called for the Rocket
/// case.
#[cfg(feature = "rocket")]
#[test]
fn all_rocket_callbacks() {
  let runctx = RunCtx::new(SVCNAME).test_mode();

  let visited = apps::Visited::default();

  assert!(!visited.init);
  assert!(!visited.run);
  assert!(!visited.shutdown);

  let visited = Arc::new(Mutex::new(visited));

  let handler = Box::new(apps::MyRocketService {
    visited: Arc::clone(&visited),
    ..Default::default()
  });

  let Ok(_) = runctx.run_rocket(handler) else {
    panic!("run_sync() unexpectedly failed");
  };

  let visited = Arc::into_inner(visited)
    .expect("Unable to into_inner Arc")
    .into_inner();
  assert!(visited.init);
  assert!(visited.run);
  assert!(visited.shutdown);
}

// vim: set ft=rust et sw=2 ts=2 sts=2 cinoptions=2 tw=79 :

Changes to www/changelog.md.

# Change log

## [Unreleased]

### Added

### Changed

### Removed

---

## [0.0.3] - 2023-10-23

### Added

- Introduce an `AppErr` type that can wrap application-specific errors that
  the service runtime callbacks return for the `Err()` case (see the sketch
  after this list).
- Make `Error::App()` take two values: a `CbOrigin` that is used to identify
  which callback returned an error, and an `AppErr` containing the
  application-specific error.
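
A minimal sketch of what this looks like from a server application's point of
view, modeled on the tests added in this release (the error type `MyError` is
made up for illustration; `qsu::AppErr::new` is the wrapping constructor used
by the tests):

```rust
use std::fmt;

// Hypothetical application-defined error type.
#[derive(Debug)]
struct MyError(String);

impl fmt::Display for MyError {
  fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
    write!(f, "my error; {}", self.0)
  }
}

impl std::error::Error for MyError {}

// Wrap the application error in qsu's AppErr so callbacks can return it.
impl From<MyError> for qsu::AppErr {
  fn from(err: MyError) -> qsu::AppErr {
    qsu::AppErr::new(err)
  }
}

// A callback body can then hand its own error type back to the call that
// kicked off the runtime.
fn init_step() -> Result<(), qsu::AppErr> {
  Err(MyError("init failed".into()).into())
}
```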

### Changed

- Make it possible to instruct LumberJack not to initialize logging/tracing
  (because otherwise tests that initialize the _qsu_ runtime will panic).
- Major refactoring.  Moved runtime to its own `rt` submodule, and put it
  behind a (default) `rt` feature.
- Put the tokio server application runtime behind a `tokio` feature.
  Note: qsu still depends on tokio even when tokio runtime support is
  disabled (albeit only with the `sync` feature, for channels).

### Removed

- `leak_default_service_name()` was removed because it no longer serves a
  purpose.
- The `signals` module is no longer public.  (It still exists, but is
  considered an implementation detail).



---

## [0.0.2] - 2023-10-19

### Added

- Added some optional clap integration convenience functionality that can be
  enabled using the `clap` feature.
- Added `SvcEvt::Terminate`.
- The argument parser allows setting default service logging/tracing settings
  when registering a service.
- A high-level argument parser that wraps service registration,
  deregistration, and running has been integrated into the qsu core library.

### Changed

- SIGTERM/Ctrl+Break/Close sends `SvcEvt::Terminate` rather than
  `SvcEvt::Shutdown`.
- `eventlog` errors are mapped to `Error::LumberJack` (instead of
  `Error::SubSystem`).

### Removed

---

## [0.0.1] - 2023-10-15

Initial release.

Changes to www/index.md.

    variable.
  - The keys `TraceFile` and `TraceLevel` correspond to the environment
    variables `TRACE_FILE` and `TRACE_LEVEL`.  Both must be configured in the
    registry to enable tracing (see the sketch after this list).
- Logging through `log` will log to the Windows Event Log.
- Logging using `trace` will write trace logs to a file.
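
For example, a minimal sketch of enabling tracing by creating these two values
under a service's `Parameters` registry subkey.  The service name `myservice`,
the trace-file path, and the level string are made up, and the assumption that
both values are plain strings is just that — an assumption:

```rust
use winreg::RegKey;
use winreg::enums::HKEY_LOCAL_MACHINE;

/// Hypothetical helper: create TraceFile/TraceLevel under the Parameters
/// subkey of a service named "myservice".
fn enable_tracing() -> std::io::Result<()> {
  let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
  let (params, _disp) = hklm.create_subkey(
    "SYSTEM\\CurrentControlSet\\Services\\myservice\\Parameters"
  )?;
  params.set_value("TraceFile", &"C:\\ProgramData\\myservice\\trace.log")?;
  params.set_value("TraceLevel", &"debug")?;
  Ok(())
}
```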


## When to use it, and when not to use it

To be frank, if you're writing a systemd-only service, then the value of using
_qsu_ is negligible (it might even be wasteful to pull in _qsu_).  The
benefits of using _qsu_ are noticed mostly when targeting the Windows Services
subsystem, and they become most apparent when targeting multiple service
subsystems in the same project and wanting a similar API for developing
non-async and async services.


## Feature labels in documentation

The crate's documentation uses automatically generated feature labels, which
currently requires nightly features.  To build the documentation locally, use:

```

- The repository contains three different in-tree examples:
  - `hellosvc` is a "sync" (read: non-`async`) server application which dumps
    logs and traces every 30 seconds until the service is terminated.
  - `hellosvc-tokio` is the same as `hellosvc`, but is an `async` server that
    runs on top of tokio.
  - `hellosvc-rocket` is a Rocket server that writes logs and traces each time
    a request is made to the index page.
- The [staticrocket](https://crates.io/crates/staticrocket) crate uses qsu.
  In particular, it implements the `Rocket` service handler, and it adds a
  custom application-specific subcommand to the `ArgParser`.


## Change log

The details of changes can always be found in the timeline, but for a
high-level view of changes between released versions there's a manually
maintained [Change Log](./changelog.md).



## Project status

This crate is a work-in-progress, still in an early prototyping stage.  It
works for basic use-cases, but the API and some of the semantics are likely to
change. The error handling needs work.
change.