git.conrad-schecker.de Git - fj-contracting.git/commitdiff
initial commit
author Conrad Schecker <redacted>
Mon, 9 Feb 2026 09:13:15 +0000 (10:13 +0100)
committer Conrad Schecker <redacted>
Mon, 9 Feb 2026 09:13:15 +0000 (10:13 +0100)
17 files changed:
Cargo.toml [new file with mode: 0644]
LICENSE [new file with mode: 0644]
README.md [new file with mode: 0644]
generate_hrg.sh [new file with mode: 0755]
graphs/graph.txt [new file with mode: 0644]
src/bin/bench.rs [new file with mode: 0644]
src/bin/fixed_graph.rs [new file with mode: 0644]
src/bin/hrg.rs [new file with mode: 0644]
src/fj_contract.rs [new file with mode: 0644]
src/friedkin_johnson.rs [new file with mode: 0644]
src/graphgenerator.rs [new file with mode: 0644]
src/graphloader.rs [new file with mode: 0644]
src/instancegenerator.rs [new file with mode: 0644]
src/instancesimulator.rs [new file with mode: 0644]
src/instancesolver.rs [new file with mode: 0644]
src/lib.rs [new file with mode: 0644]
src/statistics.rs [new file with mode: 0644]

diff --git a/Cargo.toml b/Cargo.toml
new file mode 100644 (file)
index 0000000..79f49b5
--- /dev/null
@@ -0,0 +1,13 @@
+[package]
+name = "fj-contracting"
+version = "0.3.0"
+edition = "2024"
+
+[dependencies]
+clap = { version = "4.5.40", features = ["derive"] }
+itertools = "0.14.0"
+nalgebra = "0.34.0"
+rand = "0.9.2"
+rand_distr = "0.5.1"
+rayon = "1.11.0"
+
diff --git a/LICENSE b/LICENSE
new file mode 100644 (file)
index 0000000..81b492e
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026 conradschecker
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644 (file)
index 0000000..4a3d7a4
--- /dev/null
+++ b/README.md
@@ -0,0 +1,55 @@
+This is an implementation of the FJ contracting problem in the Rust programming language.
+It was used to run experiments on the performance of the greedy algorithms described in the paper "Contract Design in Opinion Formation Games".
+
+## Build and Run
+
+The easiest way to run this application is to use [Cargo](https://doc.rust-lang.org/cargo/getting-started/installation.html), the package manager for Rust.
+After cloning the repository, the project can be built using 
+```sh
+cargo build
+``` 
+For performance reasons, using the `--release` flag is highly recommended in order to enable compiler optimizations.
+
+There are two main CLIs in this project.
+- `fixed_graph` for simulations on a fixed graph with varying numbers of influencers.
+- `hrg` for simulations on hyperbolic random graphs (or any other set of graphs).
+
+Additionally, there is a binary `bench` which provides a testing environment for identifying bottlenecks of the implementation.
+All CLIs provide additional information about the required parameters using the `-h` flag.
+Note that you can also build and run the CLIs directly using cargo, e.g., by 
+```sh
+cargo run --release --bin fixed_graph -- -h
+```
+
+Extensive documentation can be built using 
+```sh
+cargo doc --no-deps
+```
+
+## Reproducing Results
+
+As described in the paper, simulations were performed on several graphs from the [Networkrepository](https://networkrepository.com/) as well as on hyperbolic random graphs, for which we used the implementation from [chistopher/girgs](https://github.com/chistopher/girgs).
+We provide a shell script that invokes their CLI `genhrg` for batch generation of hyperbolic random graphs of the same size.
+Details can be found in the comments of the script (`./generate_hrg.sh`).
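For example, assuming `genhrg` is built and the script's default output directory is used, all power-of-two sizes can be generated in one loop. The size range and repetition count below are illustrative:

```shell
# Batch-generate 128 hyperbolic random graphs for each power-of-two
# size from 256 to 4096 (requires genhrg; see the setup in generate_hrg.sh).
for n in 256 512 1024 2048 4096; do
    ./generate_hrg.sh "$n" 128
done
```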
+
+Once all necessary hyperbolic random graphs have been generated, the simulations can be started with the following commands:
+```sh
+mkdir -p ./results/
+
+cargo run --release --bin hrg -- --input-directory ./graphs/ --output-directory ./results/ --action-set-generator BinaryP0Proportional --influencer-selector MostInfluential --influencer-count-param-a 1 --influencer-count-param-b 2 --n-min 256 --n-max 4096
+```
+This will run the simulations (with 128 repetitions) for graphs of sizes n = 256, 512, ..., 4096, using `BinaryP0(1 - 1/n)` for generating action sets and selecting the `log2(n) - 2` most influential agents as influencers in each run.
+The necessary graphs have to be in the `./graphs/` directory.
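For reference, the influencer count used by `hrg` follows the formula `a * log_2(n) - b`, so the parameters above (`a = 1`, `b = 2`) give 6 to 10 influencers for these sizes. A standalone sketch mirroring the `influencer_count` helper in `src/bin/hrg.rs`:

```rust
// Sketch of the influencer-count formula a * log2(n) - b used by the hrg CLI.
fn influencer_count(a: u32, b: u32, n: usize) -> usize {
    (a * n.ilog2() - b) as usize
}

fn main() {
    // Network sizes 256, 512, ..., 4096 as in the command above.
    for k in 8..=12 {
        let n = 1usize << k;
        println!("n = {n}: {} influencers", influencer_count(1, 2, n));
    }
}
```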
+
+You can also store fixed graphs in that directory.
+For example, to run the simulation on `socfb-Caltech36.mtx` in the directory `./graphs/`, use the following command:
+```sh
+mkdir -p ./results/
+
+cargo run --release --bin fixed_graph -- --input-file ./graphs/socfb-Caltech36.mtx --output-directory ./results/ --action-set-generator GeometricFirstvalueZero --action-set-size 64 --influencer-selector MostInfluential
+```
+This will generate an action set using `GeometricFirstvalueZero(64)` and select the most influential agents as influencers.
+By default, there will be 2 to 12 influencers, and for each influencer count, 128 repetitions will be made.
+
+Note that simulations can take a considerable amount of time even on relatively small graphs with ~1000 nodes.
+By default, the simulations are parallelized.
diff --git a/generate_hrg.sh b/generate_hrg.sh
new file mode 100755 (executable)
index 0000000..96c8ba6
--- /dev/null
@@ -0,0 +1,31 @@
+#!/bin/sh
+
+### Usage ###
+
+# ./generate_hrg.sh 1024 128
+
+# This generates 128 hyperbolic random graphs, each with 1024 nodes.
+
+
+### Setup ###
+
+# path to executable (hyperbolic random graph generator from https://github.com/chistopher/girgs)
+bin_path=~/girgs/build/genhrg
+
+# path to output directory
+output_dir=./graphs/
+
+###
+
+
+n=$1
+repetitions=$2
+rmax=$((repetitions - 1))
+
+for r in $(seq 0 $rmax)
+do
+       outputfile=$output_dir"hrg_"$n"_"$r"_edges"
+       $bin_path -n $n -edge 1 -file $outputfile -rseed $RANDOM -aseed $RANDOM -sseed $RANDOM
+       sed -i '1s/^/% /' $outputfile".txt"
+done           
+
diff --git a/graphs/graph.txt b/graphs/graph.txt
new file mode 100644 (file)
index 0000000..63bd5ac
--- /dev/null
@@ -0,0 +1,11 @@
+%%% This is a simple K5 with node indices 1,...,5 %%%
+1 2
+1 3
+1 4
+1 5
+2 3
+2 4
+2 5
+3 4
+3 5
+4 5
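The file above illustrates the expected input format: lines starting with `%` are comments, and every other line lists one edge as two whitespace-separated node ids. A minimal parsing sketch, independent of the crate's `graphloader`:

```rust
// Parse a simple edge list: '%'-prefixed lines are comments,
// every other non-empty line holds two whitespace-separated node ids.
fn parse_edge_list(input: &str) -> Vec<(usize, usize)> {
    input
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('%'))
        .map(|line| {
            let mut ids = line.split_whitespace();
            let u = ids.next().unwrap().parse().unwrap();
            let v = ids.next().unwrap().parse().unwrap();
            (u, v)
        })
        .collect()
}

fn main() {
    let k5 = "%%% K5 %%%\n1 2\n1 3\n1 4\n1 5\n2 3\n2 4\n2 5\n3 4\n3 5\n4 5\n";
    println!("{} edges", parse_edge_list(k5).len()); // K5 has 10 edges
}
```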
diff --git a/src/bin/bench.rs b/src/bin/bench.rs
new file mode 100644 (file)
index 0000000..f7ff0ea
--- /dev/null
@@ -0,0 +1,147 @@
+//! Evaluate the performance of the simulation.
+//!
+//! This is a CLI with the main purpose of providing a testing environment.
+//! It extends a given start graph to a specified size, according to the Barabási-Albert model of random graphs.
+//! Then, a random instance of the FJ contracting problem is created from this extended graph and the specified parameters.
+//! Afterwards, the instance is solved using different methods (exact and approximate) that are implemented.
+//! There are extensive measurements of the running time of different parts of the simulation, which enables identification of bottlenecks.
+//!
+//! ## Parameters
+//!
+//! Required:
+//! - `--input-file` : Path to a text file that contains a (small) start graph.
+//! - `--n-extend` : Number of nodes that should be added to the start graph.
+//! - `--mu` : Number of edges that should be added for each additional node.
+//! - `--action-set-generator` : [`SimpleActionSetGenerator`] that should be used. Possible Choices:
+//!    - `DiscreteUniform` (requires `action-set-size` to be specified)
+//!    - `Geometric` (requires `action-set-size` to be specified)
+//!    - `GeometricFirstvalueZero` (requires `action-set-size` to be specified)
+//!    - `BinaryBeta` (requires `beta` to be specified)
+//!    - `BinaryP0` (requires `p0` to be specified)
+//!    - `BinaryP0Proportional` (uses `BinaryP0` with `p0 = 1 - 1/n`, where `n` is the network size)
+//! - `--influencer-selector` : [`SimpleAgentSelector`] that should be used for selecting influencers. Possible Choices:
+//!    - `Random`
+//!    - `HighestOutDegree`
+//!    - `MostInfluential`
+//! - `--influencer-count` : The number of influencers.
+//!
+//! Optional:
+//! - `--minimum-node-id` : Node with the smallest id in the input graph. (Default: 1).
+//! - `--action-set-size` : If the action set generator requires a size, this option must be set to an integer. Otherwise, the program will panic.
+//! - `--beta` : If the action set generator requires a parameter beta, this option must be set to a float between 0 and 1. Otherwise, the program will panic.
+//! - `--p0` : If the action set generator requires a parameter p0, this option must be set to a float between 0 and 1. Otherwise, the program will panic.
+//!
+//! ## Example
+//! ```sh
+//! ./bench --input-file ./graph.txt --n-extend 1019 --mu 5 --action-set-generator GeometricFirstvalueZero --action-set-size 64 --influencer-count 10 --influencer-selector MostInfluential
+//! ```
+
+use fj_contracting::instancegenerator;
+use fj_contracting::instancegenerator::{RewardType,SimpleAgentSelector,SimpleActionSetGenerator};
+use fj_contracting::instancesolver::{InstanceWrapper,SolverResult,SimpleSolver,HeuristicSolver};
+use fj_contracting::graphloader;
+use fj_contracting::graphgenerator;
+
+use clap::Parser;
+use std::time::Instant;
+use std::str::FromStr;
+
+#[derive(Parser, Debug)]
+#[command(version, about, long_about = None)]
+struct Args {
+       /// Path to a text file that contains a (small) start graph.
+       #[arg(long)]
+    input_file: String,
+    
+    /// Node with the smallest id in the input graph.
+    #[arg(long, default_value_t = 1)]
+    minimum_node_id: usize,
+    
+    /// Number of nodes that should be added to the start graph.
+       #[arg(long)]
+       n_extend: usize,
+    
+    /// Number of edges that should be added for each additional node.
+    #[arg(long)]
+    mu: usize,
+
+       /// SimpleActionSetGenerator that should be used. 
+       /// Possible Choices:
+       /// DiscreteUniform, Geometric, GeometricFirstvalueZero, BinaryBeta, BinaryP0, BinaryP0Proportional
+       #[arg(long)]
+    action_set_generator: String,
+    
+    /// If the action set generator requires a size, this option must be set to an integer.
+    #[arg(long)]
+    action_set_size: Option<usize>,
+    
+    /// If the action set generator requires a parameter beta, this option must be set to a float between 0 and 1.
+    #[arg(long)]
+    beta: Option<f32>,
+    
+    /// If the action set generator requires a parameter p0, this option must be set to a float between 0 and 1.
+    #[arg(long)]
+    p0: Option<f32>,
+    
+    /// SimpleAgentSelector that should be used for selecting influencers.
+    /// Possible Choices: Random, HighestOutDegree, MostInfluential
+    #[arg(long)]
+    influencer_selector: String,
+    
+    /// The number of influencers.
+    #[arg(long)]
+    influencer_count: usize,
+}
+
+fn main() {
+       let separator = ' ';
+       let args = Args::parse();
+       let generator_type = SimpleActionSetGenerator::from_str(&args.action_set_generator, args.action_set_size, args.beta, args.p0)
+               .expect("Action set generator unknown.");
+       let reward_type = RewardType::Unweighted;
+       let influencer_selector = SimpleAgentSelector::from_str(&args.influencer_selector)
+               .expect("Influencer selector unknown.");
+       let heuristic_solvers = vec![HeuristicSolver::GreedyImprovementsMCKPSorting, HeuristicSolver::GreedyMCKPOnlyKnapsack, HeuristicSolver::BestSingleAgentContract, HeuristicSolver::GreedyMCKP];
+       
+       // Load Graph from file.
+       let (n0, startedges) = graphloader::try_from_file(&args.input_file, separator, args.minimum_node_id).unwrap();
+       println!("Obtained graph with {n0} nodes and {} edges.", startedges.len());
+       let extended_edgelist = graphgenerator::ba_edgelist(args.n_extend, args.mu, &startedges).unwrap();
+       let n = n0 + args.n_extend;
+       let adjacency_vec = graphloader::edgelist_to_adjacency_vec(n, &extended_edgelist);
+       println!("Preferential Attachment of {} additional nodes and {} edges for each attached node completed.", args.n_extend, args.mu);
+       
+       // Create FJ Network
+       let stime = Instant::now();
+       let network = instancegenerator::random_network_from_graph(n, &adjacency_vec);
+       println!("Network with {} nodes created in {:.2?} (mostly matrix inverse computation).", network.size(), stime.elapsed());
+       
+       // Create and wrap FJ Contract Instance
+       let stime = Instant::now();
+       let instance = instancegenerator::random_instance_from_network(network, &reward_type, &generator_type);
+       let instancewrapper = InstanceWrapper::new(instance, &influencer_selector, args.influencer_count);
+	println!("Instance with {} influencers created and wrapped in {:.2?} (mostly breakpoint computation; obtained {} breakpoints).", args.influencer_count, stime.elapsed(), instancewrapper.get_breakpoint_sum_for_influencers());
+       
+       let opt_states_to_check: usize = instancewrapper.get_breakpoint_count_vec().iter().product();
+       
+       if opt_states_to_check > 5000000 {
+               println!("Computing OPT: Find the maximum utility yielded by one out of ~{} million states...", opt_states_to_check / 1000000);
+       } else if opt_states_to_check > 5000 {
+               println!("Computing OPT: Find the maximum utility yielded by one out of ~{} thousand states...", opt_states_to_check / 1000);
+       } else {
+               println!("Computing OPT: Find the maximum utility yielded by one out of {} states...", opt_states_to_check);
+       }
+       
+       let stime = Instant::now();
+       let opt = SolverResult::obtain(&instancewrapper, &SimpleSolver::Optimal, None);
+       let duration = stime.elapsed();
+       println!(" > OPT = {:.5?} with {} contract(s), alpha_sum = {}, reward_contribution = {}, obtained in {duration:.2?} (speed: ~{:.2?}M states per second).", opt.utility, opt.contract_count, opt.alpha_sum, opt.reward_contribution, opt_states_to_check as u128 / duration.as_micros());
+               
+       for heuristic in &heuristic_solvers {
+
+               let stime = Instant::now();
+               let heuristic_result = SolverResult::obtain(&instancewrapper, heuristic, Some(opt.utility));
+               let duration = stime.elapsed();
+               println!(" > {heuristic:?} = {:.5?} with {} contract(s), alpha_sum = {}, reward_contribution = {}, obtained in {duration:.2?}.", heuristic_result.utility, heuristic_result.contract_count, heuristic_result.alpha_sum, heuristic_result.reward_contribution);
+       }
+}
diff --git a/src/bin/fixed_graph.rs b/src/bin/fixed_graph.rs
new file mode 100644 (file)
index 0000000..7315706
--- /dev/null
@@ -0,0 +1,121 @@
+//! Perform a repeated simulation on random instances generated from a fixed graph.
+//!
+//! This CLI is used for performing the simulation on a fixed input graph provided by a text file.
+//! For each number of influencers from `min_influencer_count` to `max_influencer_count`, the simulation runs multiple times.
+//! In each repetition, an instance of the FJ contracting problem is randomly generated from the input graph,
+//! using the given parameters. Then the instance is solved both optimally as well as with all implemented heuristics.
+//! A statistical analysis of the obtained revenue and approximation ratios over all repetitions is performed and written to file.
+//!
+//! ## Parameters
+//!
+//! Required:
+//! - `--input-file` : Path to a text file that contains a graph on which the simulation is performed.
+//! - `--output-directory` : Path to a directory where the output files will be placed. Ensure write access to that directory.
+//! - `--action-set-generator` : [`SimpleActionSetGenerator`] that should be used. Possible Choices:
+//!    - `DiscreteUniform` (requires `action-set-size` to be specified)
+//!    - `Geometric` (requires `action-set-size` to be specified)
+//!    - `GeometricFirstvalueZero` (requires `action-set-size` to be specified)
+//!    - `BinaryBeta` (requires `beta` to be specified)
+//!    - `BinaryP0` (requires `p0` to be specified)
+//!    - `BinaryP0Proportional` (uses `BinaryP0` with `p0 = 1 - 1/n`, where `n` is the network size)
+//! - `--influencer-selector` : [`SimpleAgentSelector`] that should be used for selecting influencers. Possible Choices:
+//!    - `Random`
+//!    - `HighestOutDegree`
+//!    - `MostInfluential`
+//!
+//! Optional:
+//! - `--repetitions` : Number of repeated simulations performed for each influencer count. (Default: 128).
+//! - `--minimum-node-id` : Node with the smallest id in the input graph. (Default: 1).
+//! - `--min-influencer-count` : Minimum number of influencers that should be simulated. (Default: 2).
+//! - `--max-influencer-count` : Maximum number of influencers that should be simulated. (Default: 12).
+//! - `--action-set-size` : If the action set generator requires a size, this option must be set to an integer. Otherwise, the program will panic.
+//! - `--beta` : If the action set generator requires a parameter beta, this option must be set to a float between 0 and 1. Otherwise, the program will panic.
+//! - `--p0` : If the action set generator requires a parameter p0, this option must be set to a float between 0 and 1. Otherwise, the program will panic.
+//!
+//! ## Example
+//! ```sh
+//! ./fixed_graph --input-file ./graph.txt --output-directory ./results/ --action-set-generator GeometricFirstvalueZero --action-set-size 64 --influencer-selector MostInfluential
+//! ```
+
+use fj_contracting::instancesimulator::Simulator;
+use fj_contracting::instancegenerator::{RewardType,SimpleAgentSelector,SimpleActionSetGenerator};
+use fj_contracting::instancesolver::HeuristicSolver;
+use fj_contracting::graphloader;
+
+use clap::Parser;
+use std::str::FromStr;
+use std::time::Instant;
+
+#[derive(Parser, Debug)]
+#[command(version, about, long_about = None)]
+struct Args {
+       /// Number of repeated simulations performed for each influencer count. 
+       #[arg(long, default_value_t = 128)]
+    repetitions: usize,
+    
+    /// Path to a text file that contains a graph on which the simulation is performed.
+       #[arg(long)]
+    input_file: String,
+    
+       /// Node with the smallest id in the input graph.
+    #[arg(long, default_value_t = 1)]
+    minimum_node_id: usize,
+
+       /// Path to a directory where the output files will be placed. Ensure write access to that directory.
+       #[arg(long)]
+       output_directory: String,
+
+       /// SimpleActionSetGenerator that should be used. 
+       /// Possible Choices:
+       /// DiscreteUniform, Geometric, GeometricFirstvalueZero, BinaryBeta, BinaryP0, BinaryP0Proportional
+       #[arg(long)]
+    action_set_generator: String,
+    
+    /// If the action set generator requires a size, this option must be set to an integer.
+    #[arg(long)]
+    action_set_size: Option<usize>,
+    
+    /// If the action set generator requires a parameter beta, this option must be set to a float between 0 and 1.
+    #[arg(long)]
+    beta: Option<f32>,
+    
+    /// If the action set generator requires a parameter p0, this option must be set to a float between 0 and 1.
+    #[arg(long)]
+    p0: Option<f32>,
+    
+    /// SimpleAgentSelector that should be used for selecting influencers.
+    /// Possible Choices: Random, HighestOutDegree, MostInfluential
+    #[arg(long)]
+    influencer_selector: String,
+    
+    /// The minimum number of influencers.
+    #[arg(long, default_value_t = 2)]
+    min_influencer_count: usize,
+    
+    /// The maximum number of influencers.
+    #[arg(long, default_value_t = 12)]
+    max_influencer_count: usize,
+}
+
+fn main() {
+       let separator = ' ';
+       let args = Args::parse();
+       let generator_type = SimpleActionSetGenerator::from_str(&args.action_set_generator, args.action_set_size, args.beta, args.p0)
+               .expect("Action set generator unknown.");
+       let reward_type = RewardType::Unweighted;
+       let influencer_selector = SimpleAgentSelector::from_str(&args.influencer_selector)
+               .expect("Influencer selector unknown.");
+       let heuristic_solvers = vec![HeuristicSolver::GreedyImprovementsMCKPSorting, HeuristicSolver::GreedyMCKP];
+       
+       // Load Graph from file.
+       let (n, edgevec) = graphloader::try_from_file(&args.input_file, separator, args.minimum_node_id).unwrap();
+       let adjacency_vec = graphloader::edgelist_to_adjacency_vec(n, &edgevec);
+       println!("Input graph has {n} nodes and {} edges.", &edgevec.len());
+       let filename_base = format!("fixed_n={n}");
+       
+       // Simulate
+       let stime = Instant::now();
+       let mut sim = Simulator::init(n, adjacency_vec, reward_type, generator_type, influencer_selector, heuristic_solvers, &args.output_directory, &filename_base);
+       sim.run_parallel(args.min_influencer_count, args.max_influencer_count, args.repetitions);
+       println!("Total runtime: {:.2?}", stime.elapsed());
+}
diff --git a/src/bin/hrg.rs b/src/bin/hrg.rs
new file mode 100644 (file)
index 0000000..7c8a571
--- /dev/null
@@ -0,0 +1,194 @@
+//! Perform a repeated simulation on random instances generated with a set of (hyperbolic random) graphs.
+//!
+//! This CLI is used for performing the simulation on a set of graphs, typically retrieved from an external source.
+//! Each graph has to be provided in a separate file in the given directory with a certain file name structure.
+//! For each repetition, a different graph is retrieved and an instance of the FJ contracting problem
+//! is randomly generated using that graph and the given parameters. 
+//! Then the instance is solved both optimally as well as with all implemented heuristics.
+//! A statistical analysis of the obtained revenue and approximation ratios over all repetitions is performed and written to file.
+//! 
+//! The network size may vary from `n_min` to `n_max`, and all sizes have to be powers of two. 
+//! For example, if `n_min = 256` and `n_max = 2048`, then the network sizes are 256, 512, 1024 and 2048.
+//! The number of repetitions specifies how many graphs of each network size are required to exist.
+//! The file names follow the format "hrg\_*n*\_*i*\_edges.txt", 
+//! where *n* and *i* are integers specifying the network size and the iteration, respectively. For example, 
+//! if `repetitions = 128`, then for each network size *n*, the files "hrg\_*n*\_0\_edges.txt", ..., "hrg\_*n*\_127\_edges.txt"
+//! have to exist in the input directory.
+//!
+//! The number of influencers is given by `a * log_2(n) - b`, where `a` and `b` are parameters and `n` is the size of the network.
+//!
+//! ## Parameters
+//!
+//! Required:
+//! - `--input-directory` : Path to a directory in which the input graphs are contained as text files with a certain file name structure.
+//! - `--output-directory` : Path to a directory where the output files will be placed. Ensure write access to that directory.
+//! - `--action-set-generator` : [`SimpleActionSetGenerator`] that should be used. Possible Choices:
+//!    - `DiscreteUniform` (requires `action-set-size` to be specified)
+//!    - `Geometric` (requires `action-set-size` to be specified)
+//!    - `GeometricFirstvalueZero` (requires `action-set-size` to be specified)
+//!    - `BinaryBeta` (requires `beta` to be specified)
+//!    - `BinaryP0` (requires `p0` to be specified)
+//!    - `BinaryP0Proportional` (uses `BinaryP0` with `p0 = 1 - 1/n`, where `n` is the network size)
+//! - `--influencer-selector` : [`SimpleAgentSelector`] that should be used for selecting influencers. Possible Choices:
+//!    - `Random`
+//!    - `HighestOutDegree`
+//!    - `MostInfluential`
+//! - `--influencer-count-param-a` : Integer parameter of the function `a * log_2(n) - b` for the influencer count.
+//! - `--influencer-count-param-b` : Integer parameter of the function `a * log_2(n) - b` for the influencer count.
+//! - `--n-min` : The minimum network size. Must be a power of two.
+//! - `--n-max` : The maximum network size. Must be a power of two.
+//!
+//! Optional:
+//! - `--repetitions` : Number of repeated simulations performed for each influencer count. (Default: 128).
+//! - `--minimum-node-id` : Node with the smallest id in the input graph. (Default: 0).
+//! - `--action-set-size` : If the action set generator requires a size, this option must be set to an integer. Otherwise, the program will panic.
+//! - `--beta` : If the action set generator requires a parameter beta, this option must be set to a float between 0 and 1. Otherwise, the program will panic.
+//! - `--p0` : If the action set generator requires a parameter p0, this option must be set to a float between 0 and 1. Otherwise, the program will panic.
+//!
+//! ## Example
+//! ```sh
+//! ./hrg --input-directory ./graphs/ --output-directory ./results/ --action-set-generator GeometricFirstvalueZero --action-set-size 64 --influencer-selector MostInfluential --influencer-count-param-a 1 --influencer-count-param-b 7 --n-min 256 --n-max 4096
+//! ```
+
+use fj_contracting::instancesimulator::MultiSimulator;
+use fj_contracting::instancegenerator::{RewardType,SimpleAgentSelector,SimpleActionSetGenerator};
+use fj_contracting::instancesolver::HeuristicSolver;
+use fj_contracting::graphloader;
+
+use clap::Parser;
+use std::str::FromStr;
+use std::time::Instant;
+
+#[derive(Parser, Debug)]
+#[command(version, about, long_about = None)]
+struct Args {
+       /// Number of repeated simulations performed for each influencer count. 
+       #[arg(long, default_value_t = 128)]
+    repetitions: usize,
+
+       /// Node with the smallest id in the input graph.
+    #[arg(long, default_value_t = 0)]
+    minimum_node_id: usize,
+    
+       /// Path to a directory in which the input graphs are contained as text files with a certain file name structure.
+       #[arg(long)]
+    input_directory: String,
+       
+       /// Path to a directory where the output files will be placed. Ensure write access to that directory.
+       #[arg(long)]
+       output_directory: String,
+
+       /// SimpleActionSetGenerator that should be used. 
+       /// Possible Choices:
+       /// DiscreteUniform, Geometric, GeometricFirstvalueZero, BinaryBeta, BinaryP0, BinaryP0Proportional
+       #[arg(long)]
+    action_set_generator: String,
+    
+    /// If the action set generator requires a size, this option must be set to an integer.
+    #[arg(long)]
+    action_set_size: Option<usize>,
+    
+    /// If the action set generator requires a parameter beta, this option must be set to a float between 0 and 1.
+    #[arg(long)]
+    beta: Option<f32>,
+    
+    /// If the action set generator requires a parameter p0, this option must be set to a float between 0 and 1.
+    #[arg(long)]
+    p0: Option<f32>,
+    
+    /// SimpleAgentSelector that should be used for selecting influencers.
+    /// Possible Choices: Random, HighestOutDegree, MostInfluential
+    #[arg(long)]
+    influencer_selector: String,
+
+    /// Parameter of the function `a * log_2(n) - b` that gives the number of influencers for a network of size n.
+    #[arg(long)]
+    influencer_count_param_a: u32,
+    
+       /// Parameter of the function `a * log_2(n) - b` that gives the number of influencers for a network of size n.
+    #[arg(long)]
+    influencer_count_param_b: u32,
+    
+    /// The minimum network size. Must be a power of two.
+    #[arg(long)]
+    n_min: usize,
+    
+    /// The maximum network size. Must be a power of two.
+    #[arg(long)]
+    n_max: usize,
+}
+
+
+fn influencer_count(influencer_count_param_a: u32, influencer_count_param_b: u32, logsize: u32) -> usize {
+	(influencer_count_param_a * logsize - influencer_count_param_b) as usize
+}
+
+fn main() {
+       let separator = ' ';
+       let args = Args::parse();
+       let generator_type = SimpleActionSetGenerator::from_str(&args.action_set_generator, args.action_set_size, args.beta, args.p0)
+               .expect("Action set generator unknown.");
+       let reward_type = RewardType::Unweighted;
+       let influencer_selector = SimpleAgentSelector::from_str(&args.influencer_selector)
+               .expect("Influencer selector unknown.");
+       let heuristic_solvers = vec![HeuristicSolver::GreedyImprovementsMCKPSorting, HeuristicSolver::GreedyMCKP];
+       let filename_base = format!("hrg_agentnumber={}log2(n)-{}", args.influencer_count_param_a, args.influencer_count_param_b);
+       
+       let influencer_count_alpha = args.influencer_count_param_a;
+       let influencer_count_beta = args.influencer_count_param_b;
+       let n_min_log = args.n_min.ilog2();
+       let n_max_log = args.n_max.ilog2();
+       
+       let min_influencer_count = influencer_count(influencer_count_alpha, influencer_count_beta, n_min_log);
+       let max_influencer_count = influencer_count(influencer_count_alpha, influencer_count_beta, n_max_log);
+       
+       println!("Initialize simulation with the following parameters:");
+       println!(" > Network sizes from {} to {}.", 2_usize.pow(n_min_log), 2_usize.pow(n_max_log));
+       println!(" > action-set-generator={:?}", &generator_type);
+       println!(" > reward-type={:?}", &reward_type);
+       println!(" > influencer-selector={:?}", &influencer_selector);
+       println!(" > influencer-count(n)={}log2(n)-{}, i.e., from {} to {}", influencer_count_alpha, influencer_count_beta, min_influencer_count, max_influencer_count);
+       println!(" > heuristics {:?}", &heuristic_solvers);
+       
+	if min_influencer_count == 0 {
+               eprintln!("Error: The influencer count has to be strictly positive for all networks!");
+       } else {
+       
+               let stime = Instant::now();
+               let mut sim = MultiSimulator::init(reward_type, generator_type, influencer_selector, heuristic_solvers, &args.output_directory, &filename_base);
+               
+               for logsize in n_min_log..=n_max_log {
+                       let size = 2_usize.pow(logsize);
+                       // Load graphs from file.
+                       let stime_graphloading = Instant::now();
+                       println!("Load {} hyperbolic random graphs of size {size} from disk ...", args.repetitions);
+		let nodes_agents_adjacency_vec_list: Vec<Vec<bool>> = (0..args.repetitions)
+			.map(|repetition| {
+				let filename = format!("{}/hrg_{}_{}_edges.txt", &args.input_directory, size, repetition);
+				let (_, edgevec) = graphloader::try_from_file(&filename, separator, args.minimum_node_id).unwrap();
+				graphloader::edgelist_to_adjacency_vec(size, &edgevec)
+			})
+			.collect();
+                       let influencer_count = influencer_count(influencer_count_alpha, influencer_count_beta, logsize);
+                       println!("... completed in {:.2?}.", stime_graphloading.elapsed());
+
+                       // Simulate
+                       sim.run_parallel_on_alternative_graphs(size, influencer_count, &nodes_agents_adjacency_vec_list);
+               }
+               println!("Total runtime: {:.2?}.", stime.elapsed());
+       }
+}
+
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+       #[test]
+       fn test_influencer_count() {
+               assert_eq!(influencer_count(1, 4, 5), 1); // a = 1, b = 4, log(n) = 5 ... => a * log(n) - b = 1
+               assert_eq!(influencer_count(2, 3, 6), 9); // a = 2, b = 3, log(n) = 6 ... => a * log(n) - b = 9
+       }
+}
+  
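The influencer-count schedule printed and unit-tested above is simply a·log2(n) − b. A minimal standalone sketch (the integer types are an assumption; the crate's actual `influencer_count` signature is not shown in this hunk):

```rust
// Sketch of the influencer-count schedule: influencer_count(n) = a * log2(n) - b,
// evaluated at logsize = log2(n). Signed arithmetic is an assumption, matching
// the `min_influencer_count <= 0` check in main above.
fn influencer_count(alpha: i64, beta: i64, logsize: u32) -> i64 {
    alpha * logsize as i64 - beta
}

fn main() {
    // Same cases as the unit tests above: a=1, b=4, log2(n)=5 => 1; a=2, b=3, log2(n)=6 => 9.
    assert_eq!(influencer_count(1, 4, 5), 1);
    assert_eq!(influencer_count(2, 3, 6), 9);
    println!("ok");
}
```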
diff --git a/src/fj_contract.rs b/src/fj_contract.rs
new file mode 100644 (file)
index 0000000..a1421f5
--- /dev/null
@@ -0,0 +1,380 @@
+use crate::friedkin_johnson::FJNetwork;
+use nalgebra::DVector;
+
+/// Represent instances of the FJ contracting problem. 
+///
+/// Each instance consists of a [`FJNetwork`], importance values of the agents/nodes, and [`ActionSet`]s of the agents/nodes.
+#[derive(Debug)]
+pub struct FJContractModel {
+       network: FJNetwork, 
+       agent_importance: DVector<f32>, 
+       action_sets: Vec<ActionSet>,
+}
+
+/// Store the possible actions of an agent in the [`FJContractModel`].
+///
+/// Each action is described by an opinion value, a probability, and a cost.
+#[derive(Clone,Debug)]
+pub struct ActionSet {
+       pub values: DVector<f32>,
+       pub probabilities: DVector<f32>,
+       pub costs: DVector<f32>,
+}
+
+/// Represent actions that an agent takes in the [`FJContractModel`]
+#[derive(Copy,Clone,Debug,PartialEq)]
+pub enum AgentAction {
+       /// The agent chooses an initial opinion from its [`ActionSet`]. The index of this opinion is given.
+       Deterministic(usize),
+       /// The agent randomizes.
+       Random
+}
+
+/// Trait for generating an [`ActionSet`]
+pub trait ActionSetGenerator {
+       fn try_generate(&self, param: f32) -> Result<ActionSet, &'static str>;
+}
+
+/// Trait for selecting influencers (contractable agents) in an [`FJContractModel`].
+pub trait AgentSelector {
+       fn generate_influencer_indicator_vec(&self, instance: &FJContractModel, influencer_count: usize) -> Vec<bool>;
+}
+
+impl FJContractModel {
+       /// Construct an instance of [`FJContractModel`] from a given [`FJNetwork`].
+       ///
+       /// The importance of the agents (for the reward) and their individual [`ActionSet`]s have to be provided.
+       ///
+       /// Fails if the lengths of the vectors and the size of the network are not matching.
+       pub fn try_new(network: FJNetwork, agent_importance: Vec<f32>, action_sets: Vec<ActionSet>) -> Result<Self, &'static str> {
+               if agent_importance.len() != network.size() || action_sets.len() != network.size() {
+                       Err("Could not create ContractNetwork because the lengths of the given vectors did not match the network size.")
+               } else {
+                       let agent_importance = DVector::from_vec(agent_importance);
+                       Ok(Self {network, agent_importance, action_sets})
+               }
+       }
+       
+       /// Return the size of the underlying [`FJNetwork`].
+       pub fn network_size(&self) -> usize {
+               self.network.size()
+       }
+       
+       /// Return a [`Vec`] with the out-degrees of the underlying [`FJNetwork`].
+       pub fn network_outdegrees(&self) -> Vec<(usize, usize)> {
+               self.network.get_outdegrees()
+       }
+
+       /// Return the reward for a given [`DVector`] slice of public opinions.
+       pub fn reward(&self, public_opinions: &DVector<f32>) -> f32 {
+               (self.agent_importance.transpose() * public_opinions)[(0,0)]
+       }
+       
+       /// Return the total reward contribution of all nodes that are **not** influencers.
+       ///
+       /// Influencers are specified with a boolean slice of length n, where n is the number of nodes in the network.
+       /// If and only if agent `i` is an influencer, the slice contains `true` at position `i`.
+       pub fn baseline_reward(&self, influencer_indicator_vec: &[bool]) -> f32 {
+               self.weighted_reward_influence_vec().iter()
+                       .enumerate()
+                       .filter(|(i, _wri)| !influencer_indicator_vec[*i])      // keep non-influencer nodes
+                       .map(|(i, wri)| self.initial_opinion(i, &AgentAction::Random) * wri)
+                       .sum()
+       }
+       
+       /// Return the initial opinion of the `i`-th agent for a given [`AgentAction`].
+       /// If the action indicates randomization, the expected initial opinion is returned.
+       pub fn initial_opinion(&self, i: usize, agentaction: &AgentAction) -> f32 {
+               match agentaction {
+                       AgentAction::Deterministic(j) => {
+                                self.action_sets[i].values[*j]
+                       },
+                       AgentAction::Random => {
+                               self.action_sets[i].expected_belief()
+                       }
+               }
+       }
+       
+       /// Return the vector of initial opinions for a given state.
+       pub fn initial_opinion_vec(&self, state: &Vec<&(AgentAction, f32)>) -> DVector<f32> {
+               DVector::from_iterator(self.network.size(), state.iter().enumerate().map(|(i, (agentaction, _))| self.initial_opinion(i, agentaction)))
+       }
+       
+       /// Return a vector where entry `i` describes the linear influence on the reward that the initial opinion of agent `i` has.
+       pub fn weighted_reward_influence_vec(&self) -> DVector<f32> {
+               (self.agent_importance.transpose() * self.network.get_final_mapping()).transpose()
+       }
+       
+       /// Return the utility for a given state.
+       pub fn utility(&self, state: &Vec<&(AgentAction, f32)>) -> f32 {
+               let alpha_sum = state.iter().map(|(_, alpha)| *alpha).sum::<f32>();
+               let reward = self.reward(&self.network.fj_final(&self.initial_opinion_vec(state)));
+               return (1.0 - alpha_sum) * reward;
+       }
+
+       /// Compute breakpoints (critical contracts) for a given agent `i` and a given [`ActionSet`].
+       /// 
+       /// A verbose breakpoint is a tuple composed of 
+       /// - An [`AgentAction`] that `i` chooses.
+       /// - A minimal linear contract (alpha value) that incentivizes this action.
+       /// - The value of the action, i.e., the additive contribution to the reward when it is chosen.
+       fn compute_verbose_breakpoints(&self, i: usize) -> Vec<(AgentAction, f32, f32)> {
+               let action_set = &self.action_sets[i];
+               let action_set_size = action_set.values.len();
+               let influence = self.weighted_reward_influence_vec()[i];
+               
+               // Vec for storing all computed breakpoints
+               let mut breakpoints: Vec<(AgentAction,f32,f32)> = Vec::with_capacity(action_set_size);
+
+               let max_reward_contrib = influence * action_set.values.iter().max_by(|a,b| a.total_cmp(b)).unwrap();
+
+               // the first breakpoint is at alpha = 0.0
+               let mut cur_cost = 0.0;
+               let mut cur_reward_contrib = action_set.expected_belief()*influence;
+               breakpoints.push((AgentAction::Random, 0.0, cur_reward_contrib));
+       
+               let aux_actions = std::iter::zip(action_set.values.iter(), action_set.costs.iter())
+                       // obtain indices
+                       .enumerate()
+                       // compute reward contribution of the actions.
+                       .map(|(j,(opinion, cost))| (j, opinion*influence, *cost))
+                       // remove actions that are worse than randomizing
+                       .filter(|(_j, reward_contrib, cost)| reward_contrib - cost >= cur_reward_contrib)
+                       // collect to Vec
+                       .collect::<Vec<(usize,f32,f32)>>();
+               
+               while cur_reward_contrib <= max_reward_contrib {
+                       let intersection = aux_actions
+                               .iter()
+                               // remove actions that have a lower reward contribution 
+                               // they have already been considered for being on the upper envelope
+                               .filter(|(_j, reward_contrib, _cost)| *reward_contrib > cur_reward_contrib)
+                               // compute intersections (alpha value) with current action on the upper envelope
+                               .map(|(j, reward_contrib, cost)| (j, reward_contrib, cost, (cost - cur_cost) / (reward_contrib - cur_reward_contrib)))
+                               // remove actions that have alpha > 1.
+                               .filter(|(_j, _reward_contrib, _cost, alpha)| *alpha <= 1.0)
+                               // take the action with minimum alpha
+                               .min_by(|a,b| (a.3).total_cmp(&(b.3)));
+                       
+                       match intersection {
+                               None => { 
+                                       break; 
+                               },
+                               Some((j, reward_contrib, cost, alpha)) => {
+                                       cur_cost = *cost;
+                                       cur_reward_contrib = *reward_contrib;
+                                       breakpoints.push((AgentAction::Deterministic(*j), alpha, cur_reward_contrib));
+                               }
+                       }
+               }
+               breakpoints
+       }
+       
+       /// Compute breakpoints (critical contracts) for a set of influencers.
+       ///
+       /// Influencers are specified with a boolean slice of length n, where n is the number of nodes in the network.
+       /// If and only if agent `i` is an influencer, the slice contains `true` at position `i`.
+       ///
+       /// A breakpoint is a tuple composed of 
+       /// - An [`AgentAction`] that `i` chooses.
+       /// - A minimal linear contract (alpha value) that incentivizes this action.
+       ///
+       /// For each node in the network, breakpoints are stored as a [`Vec`].
+       /// Nodes that are not influencers obtain the Random action with alpha = 0.0 as their single breakpoint.
+       /// 
+       /// Returns a [`Vec`] of length n, composed of the breakpoint [`Vec`]s of all agents.
+       pub fn get_breakpoints(&self, influencer_indicator_vec: &[bool]) -> Vec<Vec<(AgentAction, f32)>> {
+               if influencer_indicator_vec.len() != self.network.size() {
+                       panic!("The influencer_indicator_vec needs to be a boolean vector of length n.");
+               }
+               let all_breakpoints = (0..self.network.size())
+                       .map(|i| {
+                               if influencer_indicator_vec[i] {
+                                       self.compute_verbose_breakpoints(i).iter().map(|(action,alpha,_reward_contrib)| (*action,*alpha)).collect()
+                               } else {
+                                       vec![(AgentAction::Random, 0.0)]
+                               }
+                       }).collect();
+               return all_breakpoints;
+       }
+       
+       /// Compute verbose breakpoints for a list of influencers.
+       ///
+       /// The list should contain agent indices that are influencers.
+       ///
+       /// A verbose breakpoint is a tuple composed of 
+       /// - An index `i` of the corresponding agent/influencer.
+       /// - An [`AgentAction`] that `i` chooses.
+       /// - A minimal linear contract (alpha value) that incentivizes this action.
+       /// - The value of the action, i.e., the additive contribution to the reward when it is chosen.
+       /// 
+       /// For each influencer, a [`Vec`] of verbose breakpoints is computed.
+       /// Returns a [`Vec`] that contains a [`Vec`] with verbose breakpoints for each influencer.
+       pub fn get_verbose_breakpoints(&self, influencers_list: &[usize]) -> Vec<Vec<(usize, AgentAction, f32, f32)>> {
+               influencers_list.iter()
+                       .map(|i| (i, self.compute_verbose_breakpoints(*i)))
+                       .map(|(i, breakpoints_i)| breakpoints_i.iter()
+                               .map(|(action, alpha, reward_contrib)| (*i, *action, *alpha, *reward_contrib))
+                               .collect::<Vec<(usize, AgentAction, f32, f32)>>()
+                       ).collect::<Vec<Vec<(usize, AgentAction, f32, f32)>>>()
+       }
+}
+
+impl ActionSet {       
+       /// Returns the expected value of the distribution.
+       pub fn expected_belief(&self) -> f32 {
+               return (self.values.transpose() * &self.probabilities)[(0,0)];
+       }
+
+       /// Construct a new ActionSet.
+       ///
+       /// Input is a vector of triples (value, probability, cost) encoding the probability distribution.
+       ///
+       /// If the vector is empty or if the probabilities don't add up to 1.0, an error is returned.
+       pub fn try_new(value_probability_cost_triples: &Vec<(f32, f32, f32)>) -> Result<Self, &'static str> {
+               let size = value_probability_cost_triples.len();
+               if size > 0 {
+                       let probabilities: DVector<f32> = DVector::from_iterator(size, value_probability_cost_triples.iter().map(|(_,p,_)| *p));
+                       if (1.0_f32 - probabilities.iter().sum::<f32>()).abs() > 0.000001_f32 { 
+                               Err("The sum of probabilities is not (sufficiently close to) 1.0")
+                       } else {
+                               let values = DVector::from_iterator(size, value_probability_cost_triples.iter().map(|(v,_,_)| *v));
+                               let costs = DVector::from_iterator(size, value_probability_cost_triples.iter().map(|(_,_,c)| *c));
+                               Ok(Self{values, probabilities, costs})
+                       }
+               } else {
+                       Err("No elements are given.")
+               }
+       }
+}
+
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use nalgebra::DVector;
+
+    #[test]
+    fn belief_distribution_creation() {
+               let value_probability_cost_triples = vec![(0.5, 0.125, 0.3), (0.2, 0.75, 0.1), (0.8, 0.125, 0.6)];
+               let belief_distribution = ActionSet::try_new(&value_probability_cost_triples);
+               assert!(belief_distribution.is_ok());
+               let belief_distribution = belief_distribution.unwrap();
+               assert_eq!(belief_distribution.expected_belief(), 5./16.);
+    }
+    
+    #[test]
+    fn contract_instance_creation_1() {
+               let indecisiveness_vec = vec![0.5, 1.0, 0.75];
+               let adjacency_vec = vec![false, false, true, false, false, false, true, true, false];
+        let network = FJNetwork::try_new_uniform(&indecisiveness_vec, &adjacency_vec).unwrap();
+        let value_probability_cost_triples = vec![(0.5, 0.125, 0.3), (0.2, 0.75, 0.1), (0.8, 0.125, 0.6)];
+               let belief_distribution = ActionSet::try_new(&value_probability_cost_triples).unwrap();
+        let agent_importance = vec![1., 1., 1.];
+        let instance = FJContractModel::try_new(network, agent_importance, vec![belief_distribution.clone(); 3]);
+        assert!(instance.is_ok());
+        let instance = instance.unwrap();
+        for (result, expected) in std::iter::zip(instance.weighted_reward_influence_vec().iter(), DVector::from_vec(vec![11./13., 22./13., 6./13.]).iter()) {
+                       assert!((result - expected).abs() < 1./(2.0_f32.powf(6.0)), "compare {result} with {expected}");
+               }
+    }
+
+       #[test]
+    fn breakpoint_computation() {
+               let indecisiveness_vec = vec![0.25, 0.5];
+               let influence_weight_vec = vec![0., 1., 1., 0.];
+        let network = FJNetwork::try_new(indecisiveness_vec, influence_weight_vec).unwrap(); 
+        
+        let value_probability_cost_triples = vec![(0., 0.9, 0.), (3./4., 0.1, 1./50.)];
+               let belief_distribution = ActionSet::try_new(&value_probability_cost_triples).unwrap();
+        let agent_importance = vec![1., 1.];
+        let instance = FJContractModel::try_new(network, agent_importance, vec![belief_distribution.clone(); 2]).unwrap();
+        
+        let verbose_breakpoints_firstplayer = instance.compute_verbose_breakpoints(0);
+        let breakpoints_firstplayer_expected = vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 28./1215.)];
+        for (bp_result, bp_expected) in std::iter::zip(verbose_breakpoints_firstplayer.iter(), breakpoints_firstplayer_expected.iter()) {
+                       assert_eq!(bp_result.0, bp_expected.0);
+                       assert!((bp_result.1 - bp_expected.1).abs() <  1./(2.0_f32.powf(6.0)));
+               }        
+    }
+    
+    #[test]
+    fn breakpoint_computation_2() {
+               let indecisiveness_vec = vec![0.5, 0.25, 0.75];
+               let adjacency_vec = vec![false, true, false, true, false, true, false, true, false];
+        let network = FJNetwork::try_new_uniform(&indecisiveness_vec, &adjacency_vec).unwrap();
+        let value_probability_cost_triples = vec![(0., 0.99, 0.), (3./5., 0.01, 1./5.)];
+         
+               let belief_distribution = ActionSet::try_new(&value_probability_cost_triples).unwrap();
+        let agent_importance = vec![1., 1., 1.];
+        let instance = FJContractModel::try_new(network, agent_importance, vec![belief_distribution.clone(); 3]);
+        assert!(instance.is_ok());
+        let instance = instance.unwrap();
+        
+               let breakpoints = instance.get_breakpoints(&[true, true, true]);
+        let breakpoints_expected = vec![
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 50./99.)],
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 50./297.)],
+                       vec![(AgentAction::Random, 0.0)],
+               ];
+               
+               for (result, expected) in std::iter::zip(breakpoints.iter(), breakpoints_expected.iter()) {
+                       assert_eq!(result.len(), expected.len());
+                       for (bp_res, bp_exp) in std::iter::zip(result.iter(), expected.iter()) {
+                               assert_eq!(bp_res.0, bp_exp.0);
+                               assert!((bp_res.1 - bp_exp.1).abs() < 1./(2.0_f32.powf(6.0)));
+                       }
+               }
+                               
+               let verbose_breakpoints = instance.get_verbose_breakpoints(&[0,1,2]);
+               let verbose_breakpoints_expected = vec![vec![(0,AgentAction::Random, 0.0, 2./500.),(0,AgentAction::Deterministic(1), 50./99., 2./5.)], vec![(1,AgentAction::Random, 0.0, 6./500.),(1,AgentAction::Deterministic(1), 50./297., 6./5.)], vec![(2,AgentAction::Random, 0.0, 1./500.)]];
+               
+               for (result, expected) in std::iter::zip(verbose_breakpoints.iter(), verbose_breakpoints_expected.iter()) {
+                       assert_eq!(result.len(), expected.len());
+                       for (bp_res, bp_exp) in std::iter::zip(result.iter(), expected.iter()) {
+                               assert_eq!(bp_res.0, bp_exp.0);
+                               assert_eq!(bp_res.1, bp_exp.1);
+                               assert!((bp_res.2 - bp_exp.2).abs() < 1./(2.0_f32.powf(6.0)));
+                               assert!((bp_res.3 - bp_exp.3).abs() < 1./(2.0_f32.powf(6.0)));
+                       }
+               }
+    }
+    
+    #[test]
+    fn breakpoint_computation_3() {
+               let indecisiveness_vec = vec![0.5, 0.25, 0.75];
+               let adjacency_vec = vec![false, true, false, true, false, true, false, true, false];
+        let network = FJNetwork::try_new_uniform(&indecisiveness_vec, &adjacency_vec).unwrap();
+        let value_probability_cost_triples = vec![(0., 0.98, 0.), (1./5., 0.01, 1./32.), (4./5., 0.01, 1./4.)];
+         
+               let belief_distribution = ActionSet::try_new(&value_probability_cost_triples).unwrap();
+        let agent_importance = vec![1., 1., 1.];
+        let instance = FJContractModel::try_new(network, agent_importance, vec![belief_distribution.clone(); 3]);
+        assert!(instance.is_ok());
+        let instance = instance.unwrap();
+        
+        // weighted reward influence
+        let wri_vec = instance.weighted_reward_influence_vec();
+        for (wri_res, wri_expected) in std::iter::zip(wri_vec.iter(), vec![2./3., 2., 1./3.].iter()) {
+                       assert!((wri_res - wri_expected).abs() < 1./(2.0_f32.powf(6.0)));
+               }
+        
+               let breakpoints = instance.get_breakpoints(&[true, true, true]);
+        let breakpoints_expected = vec![
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 75./304.), (AgentAction::Deterministic(2), 35./64.)],
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 25./304.), (AgentAction::Deterministic(2), 35./192.)],
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 75./152.)],
+               ];
+               
+               // breakpoints
+               for (result, expected) in std::iter::zip(breakpoints.iter(), breakpoints_expected.iter()) {
+                       assert_eq!(result.len(), expected.len());
+                       for (bp_res, bp_exp) in std::iter::zip(result.iter(), expected.iter()) {
+                               println!("bp computed: ({:?},{:.4}). bp expected: ({:?},{:.4})", bp_res.0, bp_res.1, bp_exp.0, bp_exp.1);
+                               assert_eq!(bp_res.0, bp_exp.0);
+                               assert!((bp_res.1 - bp_exp.1).abs() < 1./(2.0_f32.powf(6.0)));
+                       }
+               }
+    }
+}
+
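The breakpoint sweep in `compute_verbose_breakpoints` above is an upper-envelope computation over the lines u_j(alpha) = alpha·reward_contrib_j − cost_j: the next breakpoint is the smallest crossing alpha = (cost − cur_cost) / (reward_contrib − cur_reward_contrib). A minimal standalone sketch of that crossing formula, with made-up numbers:

```rust
// Each action gives the agent utility u(alpha) = alpha * reward_contrib - cost,
// a line in the contract parameter alpha. Breakpoints are the alphas where the
// upper envelope of these lines changes, i.e. the best response switches.
// Pairs are (reward_contrib, cost); the numbers below are illustrative only.
fn crossing(cur: (f32, f32), cand: (f32, f32)) -> f32 {
    // alpha at which the candidate line overtakes the current one
    (cand.1 - cur.1) / (cand.0 - cur.0)
}

fn main() {
    let random = (0.1, 0.0); // expected contribution when randomizing, zero cost
    let action = (0.5, 0.1); // deterministic action: higher contribution, positive cost
    let alpha = crossing(random, action);
    assert!((alpha - 0.25).abs() < 1e-6);
    // Below the breakpoint the agent prefers randomizing, above it the action.
    let u = |(r, c): (f32, f32), a: f32| a * r - c;
    assert!(u(random, 0.2) > u(action, 0.2));
    assert!(u(action, 0.3) > u(random, 0.3));
    println!("breakpoint at alpha = {alpha}");
}
```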
diff --git a/src/friedkin_johnson.rs b/src/friedkin_johnson.rs
new file mode 100644 (file)
index 0000000..08764a2
--- /dev/null
@@ -0,0 +1,234 @@
+use nalgebra::{DMatrix,DVector};
+
+#[derive(Debug)]
+/// Implementation of Friedkin-Johnson opinion dynamics.
+/// The model goes back to *N. Friedkin and E. Johnsen. Social influence and opinions. J. Math. Soc., 15(3-4): 193–206, 1990.*
+pub struct FJNetwork {
+       // the number of nodes of the network.
+       size: usize,
+       
+       // square stochastic matrix with "incoming" edge weights (row i, column j is the influence from j on i).
+       influence_matrix: DMatrix<f32>, 
+       
+       // diagonal matrix with susceptibility values.
+       lambda_matrix: DMatrix<f32>,
+       
+       // matrix that represents the mapping of initial opinions to public opinions in equilibrium.
+       final_mapping: DMatrix<f32>,
+}
+
+impl std::fmt::Display for FJNetwork {
+       /// Display the representational matrices for the FJ network.
+       /// This is the influence matrix (A) and the susceptibility matrix (Lambda).
+       fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "Influence:{} Lambda:{}", self.influence_matrix, self.lambda_matrix)
+    }
+}
+
+
+impl FJNetwork {
+       /// Construct a new instance.
+       ///
+       /// Expects the susceptibility values ("lambda") as a vector of size n.
+       /// The weights between the nodes have to be given as a vector of size n² that represents the matrix in column-major order.
+       /// Entry on row i, column j is the influence from j on i.
+       pub fn try_new(susceptibility_vec: Vec<f32>, influence_weight_vec: Vec<f32>) -> Result<Self, &'static str> {
+               let size = susceptibility_vec.len();
+               
+               if size == 0 {
+                       Err("Could not create FJNetwork: Vec for susceptibilities cannot be empty.")
+               } else if size.pow(2) != influence_weight_vec.len() {
+                       Err("Could not create FJNetwork: Vec for susceptibilities and Vec for influence weights had incompatible lengths.")
+               } else {
+                       let influence_matrix = DMatrix::from_vec(size, size, influence_weight_vec);
+                       // TODO: Check feasibility: Is the weights matrix stochastic?
+
+                       let susceptibilities = DVector::from_vec(susceptibility_vec);
+                       // TODO: Check feasibility: Are the values from [0,1]?
+
+                       Self::new(size, influence_matrix, susceptibilities)
+               }
+       }
+       
+       /// Create an instance from an adjacency matrix in vector notation, with uniform influence between the nodes.
+       ///
+       /// Expects the susceptibility values ("lambda") as a vector of size n, and 
+       /// a boolean vector of length n² that represents the adjacency matrix in column-major order.
+       /// The influence from i to j is 1 / deg(j), where deg(j) is the number of incoming edges of j.
+       /// Entry on row i, column j specifies whether there is an edge from i to j.
+       pub fn try_new_uniform(susceptibility_vec: &Vec<f32>, adjacency_vec: &Vec<bool>) -> Result<Self, &'static str> {
+               let size = susceptibility_vec.len();
+               
+               if size == 0 {
+                       Err("Could not create FJNetwork: Vec for susceptibilities cannot be empty.")
+               } else if size.pow(2) != adjacency_vec.len() {
+                       Err("Could not create FJNetwork: Vec for susceptibilities and Vec for adjacency matrix had incompatible lengths.")
+               } else {
+                       // Compute the number of incoming edges for each agent:
+                       // Take each column of the adjacency matrix and count the number of "true" values in it.
+                       let incoming_edges_vec = &adjacency_vec
+                               .chunks(size)
+                               .map(|chunk| chunk
+                                       .iter()
+                                       .map(|x| *x as usize)
+                                       .sum::<usize>()
+                               ).collect::<Vec<usize>>();
+                       
+                       
+                       // Generate uniform weights, which is 1 / incoming_edges for every non-zero entry (and 0 otherwise).
+                       let influence_matrix = DMatrix::from_iterator(size, size, adjacency_vec
+                               .iter()
+                               .enumerate()
+                               .map(|(i,x)| match x {
+                                       false => 0.0,
+                                       true => 1.0 / incoming_edges_vec[i / size] as f32
+                               })
+                       ).transpose();
+
+                       // Create the susceptibility vector.
+                       // If a node has no incoming edges, it needs to be completely stubborn (susceptibility = 0.0).
+                       let susceptibilities = DVector::from_iterator(
+                               size, 
+                               std::iter::zip(
+                                       susceptibility_vec.iter(), 
+                                       incoming_edges_vec.iter()
+                               ).map(|(susceptibility, incoming_edges)| match incoming_edges {
+                                       0 => 0.,
+                                       _ => *susceptibility
+                               }));
+                       // TODO: Check feasibility: Are the values in susceptibility_vec from [0,1]?
+
+                       Self::new(size, influence_matrix, susceptibilities)
+               }
+       }
+       
+       /// Creates an instance without input checks.
+       /// Expects a square DMatrix of influence weights and a DVector of susceptibilities.
+       /// Computes a matrix inverse, which is a costly operation!
+       /// If the inversion fails, an Err is returned.
+       fn new(size: usize, influence_matrix: DMatrix<f32>, susceptibilities: DVector<f32>) -> Result<Self, &'static str> {
+               let id: DMatrix<f32> = DMatrix::identity(size, size);
+               let lambda_matrix: DMatrix<f32> = DMatrix::from_diagonal(&susceptibilities);
+               let inverse_option = (&id - (&lambda_matrix * &influence_matrix)).try_inverse();
+
+               match inverse_option {
+                       Some(inverse) => {
+                               let final_mapping: DMatrix<f32> = inverse * (&id - &lambda_matrix);
+                               Ok(Self { size, influence_matrix, lambda_matrix, final_mapping })
+                       },
+                       None => Err("Could not create FJNetwork: Matrix inverse computation failed. Perhaps the given values for influence weights and susceptibilities are not representing a valid Friedkin-Johnson instance.")
+               }
+       }
+
+       /// Returns the number of nodes in the network.
+       pub fn size(&self) -> usize {
+               self.size
+       }
+       
+       /// Returns a vector that contains a pair (i, outdegree(i)) for each vertex with index i.
+       /// outdegree(i) is the number of (other) nodes on which i has a direct positive influence.
+       pub fn get_outdegrees(&self) -> Vec<(usize, usize)> {
+               (0..self.size).map(|j| (j, self.influence_matrix.column(j).iter().filter(|x| **x > 0.0).count())).collect::<Vec<(usize, usize)>>()
+       }
+               
+       /// Returns the linear mapping that translates initial opinions to public opinions in equilibrium.
+       pub fn get_final_mapping(&self) -> &DMatrix<f32> {
+               return &self.final_mapping;
+       }
+       
+       /// Returns a vector of public opinions after a single step of opinion formation.
+       /// Parameters are the intrinsic opinions and the current public opinions
+       pub fn fj_single_step(&self, initial_opinions: &DVector<f32>, public_opinions: &DVector<f32>) -> DVector<f32> {
+               let id: DMatrix<f32> = DMatrix::identity(self.size, self.size);
+               return (&id - &self.lambda_matrix) * initial_opinions + &self.lambda_matrix * &self.influence_matrix * public_opinions;
+       }
+       
+       /// Returns a vector of public opinions after a given number of steps in the Friedkin-Johnson model
+       /// Parameters are the intrinsic opinions and the current public opinions
+       pub fn fj_steps(&self, number_of_rounds: usize, initial_opinions: &DVector<f32>, public_opinions: &mut DVector<f32>) {
+               for _ in 0..number_of_rounds {
+                       *public_opinions = self.fj_single_step(initial_opinions, public_opinions);
+               }
+       }
+       
+       /// Returns a vector of final opinions for a given vector of intrinsic beliefs.
+       pub fn fj_final(&self, initial_opinions: &DVector<f32>) -> DVector<f32> {
+               &self.final_mapping * initial_opinions
+       }
+}
+
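The fixed point computed by `fj_final` via the closed form (I − ΛA)⁻¹(I − Λ)s is the limit of repeatedly applying `fj_single_step`. A minimal standalone sketch (pure Rust, no nalgebra) on the two-node network also used in the `network_creation_finalopinion_1` test:

```rust
// Two-node network: node 0 is stubborn (lambda = 0); node 1 listens only to
// node 0 with weight 1 and susceptibility 0.25.
// Update rule: x_i <- (1 - lambda_i) * s_i + lambda_i * sum_j A[i][j] * x_j.
fn fj_iterate(s: [f32; 2], rounds: usize) -> [f32; 2] {
    let (lambda0, lambda1) = (0.0_f32, 0.25_f32);
    let mut x = s; // public opinions start at the intrinsic ones
    for _ in 0..rounds {
        x = [
            (1.0 - lambda0) * s[0],                  // node 0: no incoming influence
            (1.0 - lambda1) * s[1] + lambda1 * x[0], // node 1: full weight on node 0
        ];
    }
    x
}

fn main() {
    // Closed form (I - Lambda*A)^(-1)(I - Lambda)s gives z1 = 0.25*s0 + 0.75*s1 here.
    let z = fj_iterate([1.0, 0.0], 100);
    assert!((z[0] - 1.0).abs() < 1e-6);
    assert!((z[1] - 0.25).abs() < 1e-6);
    println!("converged to {:?}", z);
}
```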
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn network_creation_from_matrix() {
+               let susceptibility_vec = vec![0., 1.];
+               let influence_weight_vec = vec![0., 1., 1., 0.];
+        let network = FJNetwork::try_new(susceptibility_vec, influence_weight_vec);
+        assert!(network.is_ok());
+        let network = network.unwrap();
+        assert_eq!(network.size(), 2);
+    }
+    #[test]
+    fn network_creation_from_adjacency_vec_1() {
+               let susceptibility_vec = vec![0.942, 0.284, 0.345, 0.234, 0.773];
+               let adjacency_vec = vec![false, false, true, true, false, false, false, false, true, true, true, false, false, false, false, true, true, false, false, true, false, true, false, true, false];
+        let network = FJNetwork::try_new_uniform(&susceptibility_vec, &adjacency_vec);
+        assert!(network.is_ok());
+        let network = network.unwrap();
+               assert_eq!(network.influence_matrix, DMatrix::from_vec(5,5, vec![0.,0.,1.,1./3.,0.,0.,0.,0.,1./3.,1./2.,1./2.,0.,0.,0.,0.,1./2.,1./2.,0.,0.,1./2.,0.,1./2.,0.,1./3.,0.]));
+    }
+    #[test]
+    fn network_creation_from_adjacency_vec_2() {
+               let susceptibility_vec = vec![0.5, 1.0, 0.75];
+               let adjacency_vec = vec![false, false, true, false, false, false, true, true, false];
+        let network = FJNetwork::try_new_uniform(&susceptibility_vec, &adjacency_vec);
+        assert!(network.is_ok());
+               let influence_matrix_expected = DMatrix::from_vec(3,3, vec![0.,0.,0.5,0.,0.,0.5,1.,0.,0.]);
+               let lambda_matrix_expected = DMatrix::from_vec(3,3, vec![0.5,0.,0.,0.,0.,0.,0.,0.,0.75]);
+               let final_mapping_expected = DMatrix::from_vec(3, 3, vec![8./13.,0.,3./13.,3./13.,1.,6./13.,2./13.,0.,4./13.]);
+               let network = network.unwrap();
+               assert_eq!(network.influence_matrix, influence_matrix_expected);
+        assert_eq!(network.lambda_matrix, lambda_matrix_expected);
+        assert_eq!(network.final_mapping, final_mapping_expected);
+    }
+    #[test]
+    fn network_creation_outdegrees() {
+               let susceptibility_vec = vec![0.75, 0.25, 0.1];
+               let influence_weight_vec = vec![0., 0.125, 0.25, 1., 0., 0.75, 0., 0.875, 0.];
+        let network = FJNetwork::try_new(susceptibility_vec, influence_weight_vec);
+        let outdegrees = vec![(0,2), (1,2), (2,1)];
+        assert_eq!(network.unwrap().get_outdegrees(), outdegrees);
+    }
+    #[test]
+    fn network_creation_finalopinion_1() {
+               let susceptibility_vec = vec![0., 0.25];
+               let influence_weight_vec = vec![0., 1., 0., 0.];
+        let network = FJNetwork::try_new(susceptibility_vec, influence_weight_vec).unwrap(); 
+        let final_mapping_expected = DMatrix::from_vec(2, 2, vec![1., 0.25, 0., 0.75]);
+        assert_eq!(*network.get_final_mapping(), final_mapping_expected);
+    }
+    #[test]
+    fn network_creation_finalopinion_2() {
+               let susceptibility_vec = vec![0.25, 0.5];
+               let influence_weight_vec = vec![0., 1., 1., 0.];
+        let network = FJNetwork::try_new(susceptibility_vec, influence_weight_vec).unwrap(); 
+        let final_mapping_expected = DMatrix::from_vec(2, 2, vec![6./7., 3./7., 1./7., 4./7.]);
+        for (result, expected) in std::iter::zip(network.get_final_mapping().iter(),  final_mapping_expected.iter()) {
+                       assert!((result - expected).abs() < 1./(2.0_f32.powf(6.0)));
+               }
+    }
+    #[test]
+    fn network_creation_finalopinion_3() {
+               let susceptibility_vec = vec![0.75, 0.25, 0.1];
+               let influence_weight_vec = vec![0., 0.125, 0.25, 1., 0., 0.75, 0., 0.875, 0.];
+        let network = FJNetwork::try_new(susceptibility_vec, influence_weight_vec); 
+        let intrinsics = DVector::from_vec(vec![1., 0., 1.]);
+        let final_opinion_expected = DMatrix::from_vec(3, 3, vec![1259./4895., 47./4895., 7./979., 576./979., 768./979., 72./979., 756./4895., 1008./4895., 900./979.]) * &intrinsics;
+               let network = network.unwrap();
+               for (result, expected) in std::iter::zip(network.fj_final(&intrinsics).iter(),  final_opinion_expected.iter()) {
+                       assert!((result - expected).abs() < 1./(2.0_f32.powf(6.0)));
+               }
+    }
+}
diff --git a/src/graphgenerator.rs b/src/graphgenerator.rs
new file mode 100644 (file)
index 0000000..4942608
--- /dev/null
@@ -0,0 +1,83 @@
+use rand;
+
+/// Draws a random sample from a geometric(p) distribution in O(1).
+fn geom(p: f32) -> usize {
+	if p == 0.0 {
+		usize::MAX // representing infinity
+	} else if p == 1.0 {
+		0
+	} else {
+		// inverse-CDF sampling; float literal patterns in `match` are deprecated, so use comparisons
+		let u = rand::random::<f32>();
+		((1.0 - u).log(1.0 - p) - 1.0).ceil() as usize
+	}
+}
+
+/// Create a directed Gilbert random graph G(n,p) with n nodes, where each possible edge is present independently with probability p.
+///
+/// Returns a boolean adjacency matrix as a vector of length n² in column-major order.
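+///
+/// # Example
+/// A minimal sketch (with p = 0 the geometric skip is infinite, so no edge is ever placed):
+/// ```
+/// use fj_contracting::graphgenerator::directed_gilbert_graph_without_loops;
+/// let g = directed_gilbert_graph_without_loops(4, 0.0);
+/// assert_eq!(g.len(), 16);
+/// assert!(g.iter().all(|&b| !b));
+/// ```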
+pub fn directed_gilbert_graph_without_loops(n: usize, p: f32) -> Vec<bool> {
+       let mut e : Vec<bool> = vec![false; n*n];
+       let mut k : usize = 0;
+       
+       while n > 1 {
+               let l = geom(p);
+               
+               // Stop if we would not be within bounds of e.
+               if l == usize::MAX || k >= n*n - 1 {
+                       break;
+               }
+               
+               // Skip l entries and the diagonal
+               k += l + 1 + ((l + k + 1) / (n + 1)) - (k / (n + 1));
+
+               // Write back to e if we're still within bounds of e.
+               if k < n*n - 1 {
+                       e[k] = true;
+               }
+       }
+       return e;
+}
+
+/// Creates a Barabási-Albert random graph from a given start graph.
+///
+/// Parameters are n_rand, which is the number of additional nodes, and param_nu, which is the number of edges that each additional node is guaranteed to have.
+///
+/// The start graph is given as a slice of pairs, where each pair is an edge in the original graph.
+/// Nodes in the original graph are integers (0,1,..., n_0 - 1).
+/// 
+/// If param_nu > n_0, an Err is returned, since such a graph cannot be generated.
+/// Otherwise, the graph is returned as a Vec of pairs, where each pair is an edge between the nodes.
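+///
+/// # Example
+/// A minimal sketch starting from a triangle (the edge count is deterministic, the edges themselves are random):
+/// ```
+/// use fj_contracting::graphgenerator::ba_edgelist;
+/// let start = [(0, 1), (1, 2), (0, 2)];
+/// let edges = ba_edgelist(5, 2, &start).unwrap();
+/// // 3 start edges plus param_nu = 2 edges for each of the 5 new nodes
+/// assert_eq!(edges.len(), 3 + 5 * 2);
+/// ```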
+pub fn ba_edgelist(n_rand: usize, param_nu: usize, startedges: &[(usize,usize)]) -> Result<Vec<(usize, usize)>, &'static str>  {
+       let mut edges: Vec<(usize,usize)> = Vec::with_capacity(startedges.len() + n_rand * param_nu);
+       
+       let mut start_node_count = 0;
+       for e in startedges {
+               start_node_count = *[start_node_count, e.0 + 1, e.1 + 1].iter().max().unwrap();
+		edges.push(*e); // (usize, usize) is Copy, so this copy is cheap
+       }
+       
+       if param_nu <= start_node_count {
+               for i in 0..n_rand {
+                       for j in 0..param_nu {
+                               let mut newedge: (usize, usize) = (0, start_node_count + i);
+                               loop {
+                                       let max_edge_index = startedges.len() + param_nu * i;
+                                       let random_edge_index = rand::random_range(0..max_edge_index);
+                                       let e: [usize;2] = [edges[random_edge_index].0, edges[random_edge_index].1];
+                                       let random_node_index = rand::random_range(0..2);
+                                       newedge.0 = e[random_node_index];
+
+                                       if !&edges[startedges.len()+param_nu*i .. startedges.len() + param_nu*i+j].contains(&newedge) {
+                                               // This edge is actually new
+                                               break;
+                                       }
+                               }
+                               
+                               edges.push(newedge);
+                       }
+               }
+               Ok(edges)
+       } else {
+               Err("param_nu <= n_0 required, where n_0 is the number of nodes in the given starting graph.")
+       }
+}
diff --git a/src/graphloader.rs b/src/graphloader.rs
new file mode 100644 (file)
index 0000000..c8e5235
--- /dev/null
@@ -0,0 +1,107 @@
+use std::fs;
+
+/// Loads a graph from a given text file.
+///
+/// Expects the input file path as string slice, the separating character as a char, and the integer number of the first node.
+///
+/// The input file format must be as follows:
+/// - Each line that shall not be parsed must start with an `%`.
+/// - Otherwise, each line represents exactly one edge, represented by two integers separated by `separator`.
+///   Nodes are represented by integers, where `firstnode` is the smallest integer that is a node.
+/// - Typically, `separator` is either a comma or a white space, and `firstnode` is either 0 or 1.   
+///
+/// If the input file cannot be parsed, this function may panic.
+///
+/// If the input file cannot be read, then there will be an error message, and [`None`] is returned.
+///
+/// Otherwise, a pair `(n, E)` is returned, where `n` is the number of nodes and `E` is the [`Vec`] of edges. 
+/// Edges are pairs of distinct integers; the graph is interpreted as undirected.
+/// Loops in the input graph are discarded.
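+///
+/// # Example
+/// For `separator = ' '` and `firstnode = 1`, an input file could look like this:
+/// ```text
+/// % lines starting with % are ignored
+/// 1 2
+/// 2 3
+/// ```
+/// Parsing this file yields `Some((3, vec![(0, 1), (1, 2)]))`.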
+
+pub fn try_from_file(input_filepath: &str, separator : char, firstnode: usize) -> Option<(usize, Vec<(usize, usize)>)> {
+       match fs::read_to_string(input_filepath) {
+               Ok(filestring) => {
+                       let mut n : usize = 0;
+                       let mut edges = Vec::new();
+                       
+                       for line in filestring.lines() {                // check the first char of the line
+                               let firstchar = line.chars().next();
+                               
+				if matches!(firstchar, Some(c) if c != '%') {
+					let i = line.find(separator).unwrap(); // position of first separator in this line
+                                       let len = line.len();
+                                       let j = match &line[i+1..len].find(separator) {
+                                               Some(k) => *k + i + 1,
+                                               None => len
+                                       };
+                                       let x = &line[0..i];
+                                       let y = &line[i+1..j];
+                                       let u : usize = x.parse::<usize>().unwrap() - firstnode;
+                                       let v : usize = y.parse::<usize>().unwrap() - firstnode;
+                                       
+                                       if u != v { // Do not add loops!
+                                               let edge = (u, v);
+                                               edges.push(edge);
+                                               n = std::cmp::max(n, std::cmp::max(u, v)+1);
+                                       }
+                               }
+                       }
+                       Some((n, edges))
+               }
+               Err(e) => {
+                       eprintln!("Could not read file '{input_filepath}': {e:?}");
+                       None
+               }
+       }
+}
+
+/// Converts a graph in edgelist representation to a graph in adjacency vector representation.
+/// 
+/// Expects the size `n` (number of nodes) and the list of edges as parameters.
+///
+/// Returns a symmetric adjacency matrix of the given graph as a boolean [`Vec`] of length n².
+///
+/// # Example
+/// ```
+/// use fj_contracting::graphloader::edgelist_to_adjacency_vec;
+/// let edgelist = vec![(0,2), (0,3), (1,3), (1,4), (3,4)];
+/// let n = 5;
+/// let adjacency_vec = edgelist_to_adjacency_vec(n, &edgelist);
+/// assert_eq!(adjacency_vec, vec![
+///            false, false, true, true, false, 
+///            false, false, false, true, true, 
+///            true, false, false, false, false, 
+///            true, true, false, false, true, 
+///            false, true, false, true, false
+///    ]);
+/// ```
+               
+pub fn edgelist_to_adjacency_vec(n: usize, edgevec: &[(usize, usize)]) -> Vec<bool> {
+       let mut e : Vec<bool> = vec![false; n*n];
+       
+       for (u,v) in edgevec {
+               e[u*n + v] = true;
+               e[v*n + u] = true;
+       }
+       return e;
+}
+
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn conversion_edgelist_to_adjacency_vec() {
+               let edgelist = vec![(0,2), (0,3), (1,3), (1,4), (3,4)];
+               let n = 5;
+               let adjacency_vec = edgelist_to_adjacency_vec(n, &edgelist);
+               assert_eq!(adjacency_vec, vec![
+                       false, false, true, true, false, 
+                       false, false, false, true, true, 
+                       true, false, false, false, false, 
+                       true, true, false, false, true, 
+                       false, true, false, true, false
+               ]);
+    }
+}
diff --git a/src/instancegenerator.rs b/src/instancegenerator.rs
new file mode 100644 (file)
index 0000000..b539880
--- /dev/null
@@ -0,0 +1,250 @@
+use crate::friedkin_johnson::FJNetwork;
+use crate::fj_contract::*;
+use nalgebra::DVector;
+use rand::seq::SliceRandom;
+use rand::prelude::*;
+
+#[derive(Debug)]
+/// Methods for selecting certain agents as influencers.
+pub enum SimpleAgentSelector {
+       /// Selecting a subset of agents uniformly at random.
+       Random,
+       /// Selecting a subset of agents by their (out) degree.
+       HighestOutDegree,
+	/// Selecting a subset of agents by their linear influence on the reward.
+       MostInfluential,
+}
+impl std::str::FromStr for SimpleAgentSelector {
+       type Err = &'static str;
+       fn from_str(input: &str) -> Result<SimpleAgentSelector, Self::Err> {
+               match input {
+                       "Random"                        => Ok(SimpleAgentSelector::Random),
+                       "HighestOutDegree"      => Ok(SimpleAgentSelector::HighestOutDegree),
+                       "MostInfluential"       => Ok(SimpleAgentSelector::MostInfluential),
+			_					=> Err("Unknown SimpleAgentSelector.")
+               }
+       }
+}
+
+impl AgentSelector for SimpleAgentSelector {
+	/// Obtain influencers for a given instance and a given number of influencers using simple selection methods.
+       ///
+       /// Return type is a boolean [`Vec`] of length `n`, where entry `i` is `true` if and only if `i` is an influencer.
+       /// `n` is the number of agents / nodes in the given network.
+       fn generate_influencer_indicator_vec(&self, instance: &FJContractModel, influencer_count: usize) -> Vec<bool> {
+               let network_size = instance.network_size();
+               match &self {
+                       SimpleAgentSelector::Random => {
+                               // all but "influencer_count" many entries are "false"
+                               let mut influencer_indicator_vec = vec![false; network_size - influencer_count];
+                               // extend the vector by "influencer_count" entries that are "true"
+                               influencer_indicator_vec.extend(vec![true; influencer_count]);
+                               // shuffle the vector
+                               influencer_indicator_vec.shuffle(&mut rand::rng());
+                               influencer_indicator_vec
+                       },
+                       SimpleAgentSelector::HighestOutDegree => {
+                               // initialize the vector with "false".
+                               let mut influencer_indicator_vec = vec![false; network_size];
+                               
+                               // get (node id, outdegree) pairs from network and sort them by outdegree
+                               let mut outdegrees: Vec<(usize,usize)> = instance.network_outdegrees();
+                               outdegrees.sort_by(|(_,degree1), (_,degree2)| degree2.cmp(degree1));
+                               
+                               // choose nodes with highest outdegrees
+                               for agent in outdegrees.iter().map(|(node_id,_)| *node_id).take(influencer_count) {
+                                       influencer_indicator_vec[agent] = true;
+                               }
+                               influencer_indicator_vec
+                       },
+                       SimpleAgentSelector::MostInfluential => {
+                               // initialize the vector with "false".
+                               let mut influencer_indicator_vec = vec![false; network_size];
+                               
+                               // get DVector containing the weighted reward influence for each node
+                               let weighted_reward_influence_dvector = instance.weighted_reward_influence_vec();
+                               
+                               // produce (node id, influence) tuples and sort by influence
+                               let mut weighted_reward_influence: Vec<(usize,&f32)> = weighted_reward_influence_dvector.into_iter().enumerate().collect();
+                               weighted_reward_influence.sort_by(|(_,influence1), (_,influence2)| influence2.total_cmp(influence1));
+                               
+				// choose nodes with highest reward influence
+                               for agent in weighted_reward_influence.iter().map(|(node_id,_)| *node_id).take(influencer_count) {
+                                       influencer_indicator_vec[agent] = true;
+                               }
+                               influencer_indicator_vec
+                       }
+               }
+       }
+}
+
+#[derive(Debug)]
+/// Types of functions that incorporate the importance (weight) of agents on the reward.
+pub enum RewardType {
+       Unweighted
+}
+impl std::str::FromStr for RewardType {
+       type Err = &'static str;
+       fn from_str(input: &str) -> Result<RewardType, Self::Err> {
+               match input {
+                       "Unweighted"    => Ok(RewardType::Unweighted),
+                       _                               => Err("Unknown RewardType.")
+               }
+       }
+}
+
+
+#[derive(Debug)]
+/// Methods of generating random action sets.
+pub enum SimpleActionSetGenerator {
+       DiscreteUniform(usize),                 // parameter: support size
+       Geometric(usize),                               // parameter: support size
+       GeometricFirstvalueZero(usize), // parameter: support size
+       BinaryBeta(f32),                                // parameter: beta
+       BinaryP0(f32),                                  // parameter: p0
+       BinaryP0Proportional,
+}
+
+impl ActionSetGenerator for SimpleActionSetGenerator {
+       fn try_generate(&self, network_size: f32) -> Result<ActionSet, &'static str> {
+               match &self {
+                       SimpleActionSetGenerator::DiscreteUniform(action_set_size)                      => SimpleActionSetGenerator::try_generate_random_discrete_uniform(*action_set_size),
+                       SimpleActionSetGenerator::Geometric(action_set_size)                                    => SimpleActionSetGenerator::try_generate_random_geometric(*action_set_size),
+                       SimpleActionSetGenerator::GeometricFirstvalueZero(action_set_size)      => SimpleActionSetGenerator::try_generate_random_geometric_firstvalue0(*action_set_size),
+                       SimpleActionSetGenerator::BinaryBeta(beta)                                              => SimpleActionSetGenerator::try_generate_random_binary_beta(*beta),
+                       SimpleActionSetGenerator::BinaryP0(p0)                                                  => SimpleActionSetGenerator::try_generate_random_binary_p0(*p0),
+                       SimpleActionSetGenerator::BinaryP0Proportional                                  => SimpleActionSetGenerator::try_generate_random_binary_p0(1. - 1. / network_size),
+               }
+       }
+}
+
+impl SimpleActionSetGenerator {
+               pub fn from_str(input_str: &str, action_set_size: Option<usize>, beta: Option<f32>, p0: Option<f32>) -> Option<SimpleActionSetGenerator> {
+               match input_str {
+                       "DiscreteUniform"                       => Some(SimpleActionSetGenerator::DiscreteUniform(action_set_size.expect("Parameter action_set_size needed"))),
+                       "Geometric"                                     => Some(SimpleActionSetGenerator::Geometric(action_set_size.expect("Parameter action_set_size needed"))),
+                       "GeometricFirstvalueZero"       => Some(SimpleActionSetGenerator::GeometricFirstvalueZero(action_set_size.expect("Parameter action_set_size needed"))),
+                       "BinaryBeta"                            => Some(SimpleActionSetGenerator::BinaryBeta(beta.expect("Parameter beta needed"))),
+                       "BinaryP0"                                      => Some(SimpleActionSetGenerator::BinaryP0(p0.expect("Parameter p0 needed"))),
+                       "BinaryP0Proportional"          => Some(SimpleActionSetGenerator::BinaryP0Proportional),
+                       _ => None
+               }
+       }
+
+       /// Generate a binaryBeta action set.
+       ///
+       /// - First action has value 0, cost 0, and random probability p0 ~ unif([0.5, 1)).
+	/// - Second action has random value s1 ~ unif[0,1), cost beta * s1 for the given parameter beta, and probability 1 - p0.
+       fn try_generate_random_binary_beta(beta: f32) -> Result<ActionSet, &'static str> {
+               let mut rng = rand::rng();
+               let p0 = rng.random::<f32>() * 0.5 + 0.5;       // unif [0.5,1)
+               let s1 = rng.random::<f32>();
+               let vpc_triples = vec![(0.0, p0, 0.0), (s1, 1.0 - p0, beta * s1)];
+               
+               ActionSet::try_new(&vpc_triples)
+       }
+       
+       /// Generate a binaryP0 action set.
+       ///
+       /// - First action has value 0, cost 0, and given probability p0.
+	/// - Second action has random value (unif[0,1)), random cost (unif[0,1) * unif[0,1)), and probability 1 - p0.
+       fn try_generate_random_binary_p0(p0: f32) -> Result<ActionSet, &'static str> {
+               let mut rng = rand::rng();
+               let beta = rng.random::<f32>();
+               let s1 = rng.random::<f32>();
+               let vpc_triples = vec![(0.0, p0, 0.0), (s1, 1.0 - p0, beta * s1)];
+               
+               ActionSet::try_new(&vpc_triples)
+       }
+
+       /// Generate a discrete uniform distribution over uniform-[0,1) values and costs.
+       ///
+       /// Precisely:
+	/// - Values are drawn independently from a continuous uniform distribution with support [0,1).
+	/// - Costs are drawn independently from a continuous uniform distribution with support [0,1).
+       /// - Probabilities are 1 / action_set_size.
+       fn try_generate_random_discrete_uniform(action_set_size: usize) -> Result<ActionSet, &'static str> {
+               if action_set_size == 0 {
+			Err("Generation of an action set of size 0 is impossible.")
+               } else {
+                       let mut rng = rand::rng();
+
+                       let values = DVector::from_iterator(action_set_size, (0..action_set_size).map(|_| rng.random::<f32>()));
+                       let costs = DVector::from_iterator(action_set_size, (0..action_set_size).map(|_| rng.random::<f32>()));
+                       let probabilities = DVector::from_vec(vec![1.0/action_set_size as f32; action_set_size]);
+
+                       Ok(ActionSet{values, probabilities, costs})
+               }
+       }
+       /// Generate a geometric distribution over uniform-[0,1) values and costs.
+       ///
+       /// Precisely:
+	/// - Values are drawn independently from a continuous uniform distribution with support [0,1).
+	/// - Costs are drawn independently from a continuous uniform distribution with support [0,1).
+       /// - Probabilities are 1/2, 1/4, ...
+       fn try_generate_random_geometric(action_set_size: usize) -> Result<ActionSet, &'static str> {
+               if action_set_size == 0 {
+			Err("Generation of an action set of size 0 is impossible.")
+               } else {
+                       let mut rng = rand::rng();
+
+                       let values = DVector::from_iterator(action_set_size, (0..action_set_size).map(|_| rng.random::<f32>()));
+                       let costs = DVector::from_iterator(action_set_size, (0..action_set_size).map(|_| rng.random::<f32>()));
+			let mut probabilities = DVector::from_iterator(action_set_size, (0..action_set_size).map(|s| 2.0_f32.powi(-(1 + s as i32))));
+                       
+                       // make the last two entries identical, since probabilities have to add up to exactly 1.
+                       probabilities[action_set_size-1] = probabilities[action_set_size-2];
+
+                       Ok(ActionSet{values, probabilities, costs})
+               }
+       }
+
+       /// Generate a geometric distribution over uniform-[0,1) values and costs, with the first value / cost being 0.0
+       ///
+       /// Precisely:
+	/// - First value is 0.0, all other values are drawn independently from a continuous uniform distribution with support [0,1).
+	/// - First cost is 0.0, all other costs are drawn independently from a continuous uniform distribution with support [0,1).
+       /// - Probabilities are 1/2, 1/4, ...
+       fn try_generate_random_geometric_firstvalue0(action_set_size: usize) -> Result<ActionSet, &'static str> {
+               if action_set_size <= 1 {
+			Err("Generation of an action set of size 0 or size 1 is impossible.")
+               } else {
+                       let mut rng = rand::rng();
+
+                       let mut values = DVector::from_iterator(action_set_size, (0..action_set_size).map(|_| rng.random::<f32>()));
+                       let mut costs = DVector::from_iterator(action_set_size, (0..action_set_size).map(|_| rng.random::<f32>()));
+			let mut probabilities = DVector::from_iterator(action_set_size, (0..action_set_size).map(|s| 2.0_f32.powi(-(1 + s as i32))));
+                       
+                       // make the last two entries identical, since probabilities have to add up to exactly 1.
+                       probabilities[action_set_size-1] = probabilities[action_set_size-2];
+                       
+                       // make the first value and first cost 0.0
+                       values[0] = 0.0;
+                       costs[0] = 0.0;
+
+                       Ok(ActionSet{values, probabilities, costs})
+               }
+       }
+}
+
+/// Construct a random instance from a given network.
+///
+/// A [`RewardType`] and a type implementing the trait [`ActionSetGenerator`] have to be provided.
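+///
+/// # Example
+/// A minimal sketch (assumes the listed modules are public in the crate root):
+/// ```
+/// use fj_contracting::friedkin_johnson::FJNetwork;
+/// use fj_contracting::instancegenerator::{random_instance_from_network, RewardType, SimpleActionSetGenerator};
+/// let network = FJNetwork::try_new(vec![0.5, 0.5], vec![0., 1., 1., 0.]).unwrap();
+/// let generator = SimpleActionSetGenerator::DiscreteUniform(2);
+/// let instance = random_instance_from_network(network, &RewardType::Unweighted, &generator);
+/// assert_eq!(instance.network_size(), 2);
+/// ```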
+pub fn random_instance_from_network(network: FJNetwork, reward: &RewardType, action_set_generator: &impl ActionSetGenerator) -> FJContractModel {
+       let belief_action_set_vec = (0..network.size())
+                       .map(|_| action_set_generator.try_generate(network.size() as f32).unwrap())
+                       .collect();
+       let reward_vec = match reward {
+               RewardType::Unweighted => vec![1.; network.size()]
+       };
+       return FJContractModel::try_new(network, reward_vec, belief_action_set_vec).unwrap();
+}
+
+/// Construct a network with random susceptibility from a graph.
+///
+/// The size n of the graph and its adjacency matrix in a boolean [`Vec`] of length n² have to be provided.
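+///
+/// # Example
+/// A minimal sketch on a complete graph with three nodes:
+/// ```
+/// use fj_contracting::instancegenerator::random_network_from_graph;
+/// let adjacency_vec = vec![false, true, true, true, false, true, true, true, false];
+/// let network = random_network_from_graph(3, &adjacency_vec);
+/// assert_eq!(network.size(), 3);
+/// ```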
+pub fn random_network_from_graph(size: usize, adjacency_vec: &Vec<bool>) -> FJNetwork {
+       let susceptibility_vec = rand::random_iter().take(size).collect();
+       return FJNetwork::try_new_uniform(&susceptibility_vec, adjacency_vec).unwrap();
+}
diff --git a/src/instancesimulator.rs b/src/instancesimulator.rs
new file mode 100644 (file)
index 0000000..4ddddc1
--- /dev/null
@@ -0,0 +1,253 @@
+use crate::instancegenerator;
+use crate::statistics::{BoxPlot,Histogram};
+use crate::instancesolver::{InstanceWrapper,SolverResult,SimpleSolver,HeuristicSolver};
+use std::time::Instant;
+use std::fs::File;
+use std::fs::OpenOptions;
+use std::io::Write;
+use std::collections::HashMap;
+use rayon::prelude::*;
+
+#[derive(Debug)]
+/// The results of different algorithms for solving a single instance of the FJ contracting problem.
+struct SingleSimulationResult<'a> {
+       opt_result: SolverResult,
+       uncontracted_result: SolverResult,
+       heuristic_results: HashMap<&'a HeuristicSolver,SolverResult>,
+}
+impl<'a> SingleSimulationResult<'a> {
+       /// Perform a single simulation on a graph.
+       ///
+	/// Creates a random FJ contract instance from a given graph with the given parameters
+	/// and solves the instance using the given set of heuristics.
+       pub fn obtain(network_size: usize, 
+                       adjacency_vec: &Vec<bool>, 
+                       reward_type: &instancegenerator::RewardType, 
+                       generator_type: &instancegenerator::SimpleActionSetGenerator, 
+                       influencer_selector: &instancegenerator::SimpleAgentSelector, 
+                       influencer_count: usize, 
+                       heuristics: &'a [HeuristicSolver]) -> Self {
+               
+               let network = instancegenerator::random_network_from_graph(network_size, adjacency_vec);
+               let instance = instancegenerator::random_instance_from_network(network, reward_type, generator_type);
+               let instancewrapper = InstanceWrapper::new(instance, influencer_selector, influencer_count);
+       
+               let opt_result = SolverResult::obtain(&instancewrapper, &SimpleSolver::Optimal, None);
+               let uncontracted_result = SolverResult::obtain(&instancewrapper, &SimpleSolver::Uncontracted, Some(opt_result.utility));
+               let heuristic_results = HashMap::from_iter(heuristics.iter().map(|heuristic| (heuristic, SolverResult::obtain(&instancewrapper, heuristic, Some(opt_result.utility)))));
+       
+               Self { opt_result, uncontracted_result, heuristic_results }
+       }
+}
+
+#[derive(Debug)]
+/// A handler for writing box plot data in a certain format to text files.
+///
+/// There are two files, one for the box plot and one separate file for all outliers.
+struct BoxPlotHandler { stats: File, outliers: File }
+impl BoxPlotHandler {
+	/// Allocate text files with meaningful names for writing the data.
+       ///
+       /// The meaningful name is specified by an identifier string slice.
+       pub fn init(path_base: &str, identifier: &str) -> Self {
+               let stat_filepath = format!("{path_base}_{identifier}.stats.dat");
+               let outlier_filepath = format!("{path_base}_{identifier}.outliers.dat");
+               let mut stat_file = OpenOptions::new().write(true).create(true).truncate(true).open(stat_filepath).expect("Could not create stats file.");
+               writeln!(stat_file, "network_size influencer_count min max median lower_q upper_q lower_w upper_w").expect("Error while writing headlines to stats file.");
+               let mut outlier_file = OpenOptions::new().write(true).create(true).truncate(true).open(outlier_filepath).expect("Could not create outlier file.");
+               writeln!(outlier_file, "network_size influencer_count outlier_value").expect("Error while writing headlines to outliers file.");
+               Self { stats: stat_file, outliers: outlier_file }
+       }
+       /// Write box plot data to files.
+	pub fn write_stats(&mut self, network_size: usize, influencer_count: &impl std::fmt::Display, boxplot: BoxPlot<f32>) {
+               writeln!(self.stats, "{network_size} {influencer_count} {boxplot}").expect("Error while writing boxplot stats to file.");
+               for outlier in boxplot.outliers() {
+                       writeln!(self.outliers, "{network_size} {influencer_count} {outlier}").expect("Error while writing boxplot outliers to file.");
+               }
+       }
+}
+
+#[derive(Debug)]
+/// A handler for writing histogram data in a certain format to a text file.
+struct HistogramHandler(File);
+impl HistogramHandler {
+       /// Allocate a file with a meaningful name for writing the data.
+       ///
+       /// The meaningful name is specified by an identifier string slice.
+       pub fn init(path_base: &str, identifier: &str) -> Self {
+               let hist_filepath = format!("{path_base}_{identifier}.hist");
+               let mut hist_file = OpenOptions::new().write(true).create(true).truncate(true).open(hist_filepath).expect("Could not create hist file.");
+		writeln!(hist_file, "contracted_agents percentage").expect("Error while writing header to hist file.");
+               Self(hist_file)
+       }
+       /// Write histogram data to file.
+       pub fn write_contracted_agents(&mut self, influencer_count: &impl std::fmt::Display, repetitions: usize, histogram: Histogram<usize>) {
+		for contract_number in 0..=histogram.threshold() {
+                       writeln!(self.0, "{contract_number}/{influencer_count} {}", histogram.distribution()[contract_number] as f32 / repetitions as f32 * 100.0).expect("Error while writing histogram to file.");
+               }
+       }
+}
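The histogram file layout produced above is one whitespace-separated line per possible contract count, giving the percentage of repetitions with that count. A minimal stand-alone sketch of that layout, where `histogram_lines` is an illustrative helper (not part of this crate) and the distribution is assumed to be a plain slice of occurrence counts:

```rust
/// Illustrative sketch (not part of the crate): render one
/// "contracted_agents percentage" line per possible contract count,
/// mirroring the header written by `HistogramHandler::init`.
fn histogram_lines(distribution: &[usize], repetitions: usize, influencer_count: usize) -> Vec<String> {
    distribution
        .iter()
        .enumerate()
        .map(|(count, occurrences)| {
            // Percentage of repetitions in which exactly `count` agents were contracted.
            format!("{count}/{influencer_count} {}", *occurrences as f32 / repetitions as f32 * 100.0)
        })
        .collect()
}

fn main() {
    // 4 repetitions: once 0 contracts, twice 1 contract, once 2 contracts.
    for line in histogram_lines(&[1, 2, 1], 4, 2) {
        println!("{line}");
    }
}
```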
+
+#[derive(Debug)]
+/// Repeated simulation of instances randomly generated from a fixed graph, with a varying number of influencers.
+pub struct Simulator {
+       network_size: usize,
+       adjacency_vec: Vec<bool>,
+       reward_type: instancegenerator::RewardType, 
+       generator_type: instancegenerator::SimpleActionSetGenerator, 
+       influencer_selector: instancegenerator::SimpleAgentSelector,
+       heuristic_solvers: Vec<HeuristicSolver>,
+       heuristic_solver_files: Vec<(BoxPlotHandler, BoxPlotHandler, HistogramHandler)>,        // utility, approx, contracted_agents
+       opt_files: (BoxPlotHandler, HistogramHandler),                                                                          // utility, contracted_agents
+       uncontracted_files: (BoxPlotHandler, BoxPlotHandler),                                                           // utility, approx
+}
+impl Simulator {
+       /// Create a simulator for a fixed graph using the specified generator parameters.
+       /// Prepare the output files for statistics about the results obtained with specified solvers.
+       pub fn init(
+               network_size: usize,
+               adjacency_vec: Vec<bool>,
+               reward_type: instancegenerator::RewardType, 
+               generator_type: instancegenerator::SimpleActionSetGenerator, 
+               influencer_selector: instancegenerator::SimpleAgentSelector,
+               heuristic_solvers: Vec<HeuristicSolver>,
+               output_directory: &str,
+               filename_base: &str,
+       ) -> Self {
+       
+               let path_base = format!("{output_directory}/{filename_base}_support={:?}_reward={:?}_agents={:?}", &generator_type, &reward_type, &influencer_selector);
+               
+               let heuristic_solver_files: Vec<_> = heuristic_solvers
+                       .iter()
+                       .map(|heuristic| (
+                                       BoxPlotHandler::init(&path_base, &format!("{heuristic:?}_util")), 
+                                       BoxPlotHandler::init(&path_base, &format!("{heuristic:?}_approx")),
+                                       HistogramHandler::init(&path_base, &format!("{heuristic:?}_agents"))
+                               )
+                       )
+                       .collect();
+               let opt_files = (
+                       BoxPlotHandler::init(&path_base, "OPT_util"), 
+                       HistogramHandler::init(&path_base, "OPT_agents")
+               );
+               let uncontracted_files = (
+                       BoxPlotHandler::init(&path_base, "Uncontracted_util"), 
+                       BoxPlotHandler::init(&path_base, "Uncontracted_approx"), 
+               );
+               
+               Self { network_size, adjacency_vec, reward_type, generator_type, influencer_selector, heuristic_solvers, heuristic_solver_files, opt_files, uncontracted_files }
+       }
+       
+       /// Run the simulator iteratively for a varying number of influencers.
+       /// For each number of influencers, `repeat` many instances are created and solved in parallel.
+       ///
+       /// Statistical evaluation subject to the influencer number is performed afterwards.
+       pub fn run_parallel(&mut self, min_influencer_count: usize, max_influencer_count: usize, repeat: usize) {
+               println!("Simulation for support={:?}, reward={:?}, agents={:?} with agent counts from {} to {}, and solving heuristics {:?}",  &self.generator_type, &self.reward_type, &self.influencer_selector, min_influencer_count, max_influencer_count, &self.heuristic_solvers);
+		for influencer_count in min_influencer_count..=max_influencer_count {
+			println!(" > Simulate {repeat} instances with {influencer_count} agents in parallel.");
+                       let stime = Instant::now();
+                       let sim_result_collection: Vec<SingleSimulationResult> = (0..repeat)
+                               .into_par_iter()        // Parallel
+                               //.into_iter()          // Iterative
+                               .map(|_| SingleSimulationResult::obtain(self.network_size, &self.adjacency_vec, &self.reward_type, &self.generator_type, &self.influencer_selector, influencer_count, &self.heuristic_solvers))
+                               .collect();
+                       
+                       println!(" > ... Finished after {:.2?}.", stime.elapsed());
+       
+                       let opt_results: Vec<_> = sim_result_collection.iter().map(|result| result.opt_result).collect();
+                       self.opt_files.0.write_stats(self.network_size, &influencer_count, SolverResult::utility_boxplot(&opt_results));
+                       self.opt_files.1.write_contracted_agents(&influencer_count, repeat, SolverResult::contractcount_histogram(&opt_results, influencer_count));
+                       
+                       let uncontracted_results: Vec<_> = sim_result_collection.iter().map(|result| result.uncontracted_result).collect();
+                       self.uncontracted_files.0.write_stats(self.network_size, &influencer_count, SolverResult::utility_boxplot(&uncontracted_results));
+                       self.uncontracted_files.1.write_stats(self.network_size, &influencer_count, SolverResult::utility_approx_factor_boxplot(&uncontracted_results));
+                       
+                       for (solver,(util_file,approx_file,contractcount_file)) in std::iter::zip(&self.heuristic_solvers,&mut self.heuristic_solver_files) {
+                               let result: Vec<_> = sim_result_collection.iter().map(|result| *result.heuristic_results.get(&solver).unwrap()).collect();
+                               util_file.write_stats(self.network_size, &influencer_count, SolverResult::utility_boxplot(&result));
+                               approx_file.write_stats(self.network_size, &influencer_count, SolverResult::utility_approx_factor_boxplot(&result));
+                               contractcount_file.write_contracted_agents(&influencer_count, repeat, SolverResult::contractcount_histogram(&result, influencer_count));        
+                       }
+               }               
+       }
+}
+
+#[derive(Debug)]
+/// Repeated simulation of instances randomly generated from multiple graphs, with a fixed number of influencers.
+pub struct MultiSimulator {
+       reward_type: instancegenerator::RewardType, 
+       generator_type: instancegenerator::SimpleActionSetGenerator, 
+       influencer_selector: instancegenerator::SimpleAgentSelector,
+       heuristic_solvers: Vec<HeuristicSolver>,
+       heuristic_solver_files: Vec<(BoxPlotHandler, BoxPlotHandler, HistogramHandler)>,        // utility, approx, contracted_agents
+       opt_files: (BoxPlotHandler, HistogramHandler),                                                                          // utility, contracted_agents
+       uncontracted_files: (BoxPlotHandler, BoxPlotHandler),                                                           // utility, approx
+}
+impl MultiSimulator {
+	/// Create a simulator independent of any particular graph, using the specified generator parameters.
+       /// Prepare the output files for statistics about the results obtained with specified solvers.
+       pub fn init(
+               reward_type: instancegenerator::RewardType, 
+               generator_type: instancegenerator::SimpleActionSetGenerator, 
+               influencer_selector: instancegenerator::SimpleAgentSelector,
+               heuristic_solvers: Vec<HeuristicSolver>,
+               output_directory: &str,
+               filename_base: &str,
+       ) -> Self {
+       
+               let path_base = format!("{output_directory}/{filename_base}_support={:?}_reward={:?}_agents={:?}", &generator_type, &reward_type, &influencer_selector);
+               
+               let heuristic_solver_files: Vec<_> = heuristic_solvers
+                       .iter()
+                       .map(|heuristic| (
+                                       BoxPlotHandler::init(&path_base, &format!("{heuristic:?}_util")), 
+                                       BoxPlotHandler::init(&path_base, &format!("{heuristic:?}_approx")),
+                                       HistogramHandler::init(&path_base, &format!("{heuristic:?}_agents"))
+                               )
+                       )
+                       .collect();
+               let opt_files = (
+                       BoxPlotHandler::init(&path_base, "OPT_util"), 
+                       HistogramHandler::init(&path_base, "OPT_agents")
+               );
+               let uncontracted_files = (
+                       BoxPlotHandler::init(&path_base, "Uncontracted_util"), 
+                       BoxPlotHandler::init(&path_base, "Uncontracted_approx"), 
+               );
+               
+               Self { reward_type, generator_type, influencer_selector, heuristic_solvers, heuristic_solver_files, opt_files, uncontracted_files }
+       }
+       
+       
+       /// Run the simulator for a given set of graphs of identical size in parallel, for a fixed number of influencers.
+       pub fn run_parallel_on_alternative_graphs(&mut self, network_size: usize, influencer_count: usize, adjacency_vec_slice: &[Vec<bool>]) {
+               let repetitions = adjacency_vec_slice.len();
+               
+               println!("For each graph, generate instances with {influencer_count} agents and perform simulation in parallel ...");
+               
+               let stime = Instant::now();
+               let sim_result_collection: Vec<SingleSimulationResult> = adjacency_vec_slice
+                       .into_par_iter()        // Parallel
+                       //.into_iter()          // Iterative
+                       .map(|adjacency_vec| SingleSimulationResult::obtain(network_size, adjacency_vec, &self.reward_type, &self.generator_type, &self.influencer_selector, influencer_count, &self.heuristic_solvers))
+                       .collect();
+               
+               println!("... finished after {:.2?}.", stime.elapsed());
+
+               let opt_results: Vec<_> = sim_result_collection.iter().map(|result| result.opt_result).collect();
+               self.opt_files.0.write_stats(network_size, &influencer_count, SolverResult::utility_boxplot(&opt_results));
+               self.opt_files.1.write_contracted_agents(&influencer_count, repetitions, SolverResult::contractcount_histogram(&opt_results, influencer_count));
+               
+               let uncontracted_results: Vec<_> = sim_result_collection.iter().map(|result| result.uncontracted_result).collect();
+               self.uncontracted_files.0.write_stats(network_size, &influencer_count, SolverResult::utility_boxplot(&uncontracted_results));
+               self.uncontracted_files.1.write_stats(network_size, &influencer_count, SolverResult::utility_approx_factor_boxplot(&uncontracted_results));
+               
+               for (solver,(util_file,approx_file,contractcount_file)) in std::iter::zip(&self.heuristic_solvers, &mut self.heuristic_solver_files) {
+                       let result: Vec<_> = sim_result_collection.iter().map(|result| *result.heuristic_results.get(&solver).unwrap()).collect();
+                       util_file.write_stats(network_size, &influencer_count, SolverResult::utility_boxplot(&result));
+                       approx_file.write_stats(network_size, &influencer_count, SolverResult::utility_approx_factor_boxplot(&result));
+                       contractcount_file.write_contracted_agents(&influencer_count, repetitions, SolverResult::contractcount_histogram(&result, influencer_count));   
+               }               
+       }
+}
diff --git a/src/instancesolver.rs b/src/instancesolver.rs
new file mode 100644 (file)
index 0000000..6e39935
--- /dev/null
@@ -0,0 +1,457 @@
+use crate::fj_contract::*;
+use crate::statistics::{BoxPlot,Histogram};
+use itertools::Itertools;
+use nalgebra::DVector;
+
+#[derive(Debug)]
+/// Wrapper for [`FJContractModel`] for a fixed set of influencers.
+pub struct InstanceWrapper {
+       pub instance: FJContractModel,
+       pub influencer_indicator_vec: Vec<bool>,
+       pub influencer_list: Vec<usize>,
+       pub verbose_breakpoints: Vec<Vec<(usize,AgentAction,f32,f32)>>,
+       pub weighted_reward_influence_vec: DVector<f32>,
+       pub baseline_reward: f32,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
+/// Solvers for two extremal cases: Optimal contract or no contract at all. 
+pub enum SimpleSolver {
+       /// The optimal contract for an instance.
+       Optimal,
+       /// The uncontracted case where no influencer receives a contract.
+       Uncontracted,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
+/// Heuristic solvers that compute approximate solutions.
+pub enum HeuristicSolver {
+       /// The best contract where only a single agent (influencer) receives a contract.
+       BestSingleAgentContract,
+	/// A contract computed using the greedy algorithm for the multiple choice knapsack problem. This solution can be arbitrarily bad.
+       GreedyMCKPOnlyKnapsack,
+	/// The GreedyMCKP algorithm compares the knapsack solution with the best single-agent contract and returns the better one.
+       GreedyMCKP,
+	/// This improved heuristic uses the sorting from MCKP, but performs a replacement only when it increases the utility.
+       GreedyImprovementsMCKPSorting,
+}
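The two MCKP-based variants share one core idea: order candidate contracts by reward gained per unit of total contract share (alpha) and take them greedily while the alpha budget lasts. The sketch below illustrates only that idea on a hypothetical simplified item type; the actual solvers work on verbose breakpoint tuples and replace an agent's earlier pick instead of skipping it.

```rust
// Hypothetical simplified candidate: (agent index, alpha cost, reward contribution).
type Item = (usize, f32, f32);

/// Greedy knapsack sketch: sort by decreasing reward-per-alpha ratio, then
/// take items while the total alpha stays within `max_alpha_sum`, with at
/// most one item per agent.
fn greedy_knapsack(mut items: Vec<Item>, max_alpha_sum: f32) -> Vec<Item> {
    items.sort_by(|a, b| (b.2 / b.1).total_cmp(&(a.2 / a.1)));
    let mut chosen: Vec<Item> = Vec::new();
    let mut alpha_sum = 0.0;
    for item in items {
        // Skip agents that already hold a contract, and items over budget.
        if chosen.iter().any(|(i, _, _)| *i == item.0) || alpha_sum + item.1 > max_alpha_sum {
            continue;
        }
        alpha_sum += item.1;
        chosen.push(item);
    }
    chosen
}

fn main() {
    // Agent 0 has two candidate contracts; agents 1 and 2 have one each.
    let items = vec![(0, 0.25, 0.5), (0, 0.5, 0.7), (1, 0.25, 0.9), (2, 0.25, 0.2)];
    println!("{:?}", greedy_knapsack(items, 0.5));
}
```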
+
+#[derive(Clone, Copy, Debug)]
+/// The result (utility and approximation ratio) for a single solution (contract).
+pub struct SolverResult {
+       pub utility: f32,
+       pub utility_approx_factor: f32,
+       pub contract_count: usize,
+       pub alpha_sum: f32,
+       pub reward_contribution: f32,
+}
+
+pub trait InstanceSolver {
+       /// Solve the given instance of the FJ contracting problem.
+       fn obtain_state(&self, instancewrapper: &InstanceWrapper) -> Vec<(usize, AgentAction, f32, f32)>;
+}
+
+impl SolverResult {
+	/// Compute the utility and approximation ratio for the (wrapped) instance, using a given solver.
+       pub fn obtain(instancewrapper: &InstanceWrapper, solver: &impl InstanceSolver, optimal_utility: Option<f32>) -> SolverResult {
+               // The approximation factor has to be rounded due to numeric instability.
+               let state = solver.obtain_state(instancewrapper);
+               let utility = instancewrapper.get_utility_from_verbose_state_2(&state);
+               let utility_approx_factor = (1000000.0 * optimal_utility.unwrap_or(utility) / utility).round() / 1000000.0;
+               let contract_count = Self::contract_count(&state);
+               let alpha_sum : f32 = state.iter().map(|(_i,_action,alpha,_rewardcontribution)| alpha).sum();
+               let reward_contribution : f32 = state.iter().map(|(_i,_action,_alpha,rewardcontribution)| rewardcontribution).sum();
+               Self { utility, utility_approx_factor, contract_count, alpha_sum, reward_contribution }
+       }
+       
+       /// Obtain a [`BoxPlot`] of the utility for a collection of results.
+       pub fn utility_boxplot(resultcollection: &[SolverResult]) -> BoxPlot<f32> {
+               BoxPlot::from_nonempty_f32_history_vec(resultcollection.iter().map(|result| result.utility).collect())
+       }
+       
+       /// Obtain a [`BoxPlot`] of the approximation ratio for a collection of results.
+       pub fn utility_approx_factor_boxplot(resultcollection: &[SolverResult]) -> BoxPlot<f32> {
+               BoxPlot::from_nonempty_f32_history_vec(resultcollection.iter().map(|result| result.utility_approx_factor).collect())
+       }
+       
+	/// Obtain a [`Histogram`] of the number of influencers that received a contract.
+       pub fn contractcount_histogram(resultcollection: &[SolverResult], size: usize) -> Histogram<usize> {
+               Histogram::from_nonempty_usize_history_vec(resultcollection.iter().map(|result| result.contract_count).collect(), size)
+       }
+       
+	fn contract_count(state: &[(usize, AgentAction, f32, f32)]) -> usize {
+               state.iter().filter(|state| state.1 != AgentAction::Random).count()
+       }
+}
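The rounding in `SolverResult::obtain` clamps the approximation factor to six decimal places so that floating-point noise does not make OPT/ALG come out slightly above 1 when the two utilities are actually equal. A stand-alone sketch of just that step (`round_approx_factor` is an illustrative name, not part of the crate):

```rust
/// Illustrative helper: compute OPT/ALG rounded to six decimal places,
/// mirroring the rounding used in `SolverResult::obtain` to absorb
/// floating-point noise.
fn round_approx_factor(optimal_utility: f32, utility: f32) -> f32 {
    (1000000.0 * optimal_utility / utility).round() / 1000000.0
}

fn main() {
    // Numerically, OPT/ALG may come out as 1.0000001 even when both are equal;
    // the rounding maps it back to exactly 1.0.
    println!("{}", round_approx_factor(1.0000001, 1.0));
}
```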
+
+impl InstanceWrapper {
+       /// Create a wrapper for an instance of [`FJContractModel`] and select influencers.
+       pub fn new(instance: FJContractModel, influencer_selector: &impl AgentSelector, influencer_count: usize) -> Self {
+               let influencer_indicator_vec = influencer_selector.generate_influencer_indicator_vec(&instance, influencer_count);
+               let influencer_list = influencer_indicator_vec.iter().enumerate().filter(|(_i,is_agent)| **is_agent).map(|(i,_isagent)| i).collect::<Vec<usize>>();
+               let verbose_breakpoints = instance.get_verbose_breakpoints(&influencer_list);
+               let weighted_reward_influence_vec = instance.weighted_reward_influence_vec();
+               let baseline_reward = instance.baseline_reward(&influencer_indicator_vec);
+               Self { instance, influencer_indicator_vec, influencer_list, verbose_breakpoints, weighted_reward_influence_vec, baseline_reward }
+       }
+       
+       /// The number of breakpoints / critical contracts that each influencer has.
+       pub fn get_breakpoint_count_vec(&self) -> Vec<usize> {
+               self.verbose_breakpoints.iter().map(|bpvec_for_node_i| bpvec_for_node_i.len()).collect()
+       }
+       
+	/// The total number of breakpoints / critical contracts that exist.
+       pub fn get_breakpoint_sum_for_influencers(&self) -> usize {
+               self.verbose_breakpoints.iter().map(|verbose_bp_vec| verbose_bp_vec.len()).sum::<usize>()
+       }
+
+	/// Compute the utility from a verbose state given as a slice of borrowed breakpoint tuples.
+	fn get_utility_from_verbose_state(&self, verbose_state: &[&(usize, AgentAction, f32, f32)]) -> f32 {
+		let alpha_sum : f32 = verbose_state.iter().map(|(_i,_action,alpha,_rewardcontribution)| alpha).sum();
+		let reward_sum : f32 = verbose_state.iter().map(|(_i,_action,_alpha,rewardcontribution)| rewardcontribution).sum();
+		(1. - alpha_sum) * (self.baseline_reward + reward_sum)
+	}
+       
+	/// Compute the utility from a verbose state given as a slice of owned breakpoint tuples.
+	fn get_utility_from_verbose_state_2(&self, verbose_state: &[(usize, AgentAction, f32, f32)]) -> f32 {
+		let alpha_sum : f32 = verbose_state.iter().map(|(_i,_action,alpha,_rewardcontribution)| alpha).sum();
+		let reward_sum : f32 = verbose_state.iter().map(|(_i,_action,_alpha,rewardcontribution)| rewardcontribution).sum();
+		(1. - alpha_sum) * (self.baseline_reward + reward_sum)
+	}
+       
+       /// Flatten the breakpoint collection such that they are in a single (unnested) [`Vec`].
+       fn get_verbose_breakpoints_flattened_positives(&self) -> Vec<(usize, AgentAction, f32, f32)> {
+		self.verbose_breakpoints.iter()
+			.flat_map(|bpvec| bpvec.iter().filter(|(_i,_action,alpha,_rewardcontrib)| *alpha > 0.))
+			.cloned()
+			.collect::<Vec<_>>()
+}
+
+impl InstanceSolver for SimpleSolver {
+       /// Solve the given instance of the FJ contracting problem, using a simple solver.
+       fn obtain_state(&self, instancewrapper: &InstanceWrapper) -> Vec<(usize, AgentAction, f32, f32)> {
+               match &self {
+                       SimpleSolver::Optimal => instancewrapper.verbose_breakpoints
+                               .iter()
+                               .multi_cartesian_product()
+                               .max_by(|bp_agentlist_x, bp_agentlist_y| instancewrapper.get_utility_from_verbose_state(bp_agentlist_x).total_cmp(&instancewrapper.get_utility_from_verbose_state(bp_agentlist_y)))
+                       .unwrap()
+                       .into_iter()
+                       .cloned()
+                       .collect(),
+                       
+                       SimpleSolver::Uncontracted => instancewrapper.influencer_list
+                               .iter()
+                               .map(|i| 
+                                       (*i, AgentAction::Random, 0.0, instancewrapper.instance.initial_opinion(*i, &AgentAction::Random) * instancewrapper.weighted_reward_influence_vec[*i])
+                               ).collect::<Vec<_>>()
+               }
+       }
+}
+
+impl InstanceSolver for HeuristicSolver {
+	/// Solve the given instance of the FJ contracting problem, using a heuristic solver.
+       fn obtain_state(&self, instancewrapper: &InstanceWrapper) -> Vec<(usize, AgentAction, f32, f32)> {
+               match &self {
+                       HeuristicSolver::GreedyImprovementsMCKPSorting => {
+                               let verbose_breakpoints_positives_sorted = HeuristicSolver::get_greedy_mckp_ordering_verbose(&instancewrapper, 1.0);
+                               HeuristicSolver::greedy_local_search_verbose(&instancewrapper, &verbose_breakpoints_positives_sorted)
+                       },
+                       HeuristicSolver::GreedyMCKPOnlyKnapsack => {
+                               let verbose_breakpoints_positives_sorted = HeuristicSolver::get_greedy_mckp_ordering_verbose(&instancewrapper, 0.5);
+                               HeuristicSolver::knapsack_verbose(&instancewrapper, &verbose_breakpoints_positives_sorted, 0.5)
+                       },
+                       HeuristicSolver::BestSingleAgentContract => {
+                               let breakpoints_flattened_positives = instancewrapper.get_verbose_breakpoints_flattened_positives();
+                               HeuristicSolver::at_most_one_contract_verbose(&instancewrapper, &breakpoints_flattened_positives)
+                       },
+                       HeuristicSolver::GreedyMCKP => {
+                               let verbose_breakpoints_positives_sorted = HeuristicSolver::get_greedy_mckp_ordering_verbose(&instancewrapper, 0.5);
+                               let knapsack_state = HeuristicSolver::knapsack_verbose(&instancewrapper, &verbose_breakpoints_positives_sorted, 0.5);
+                               let knapsack_utility = instancewrapper.get_utility_from_verbose_state_2(&knapsack_state);
+                               
+                               let breakpoints_flattened_positives = instancewrapper.get_verbose_breakpoints_flattened_positives();
+                               let fallback_state = HeuristicSolver::at_most_one_contract_verbose(&instancewrapper, &breakpoints_flattened_positives);
+                               let fallback_utility = instancewrapper.get_utility_from_verbose_state_2(&fallback_state);
+                               
+				if knapsack_utility > fallback_utility { knapsack_state } else { fallback_state }
+                       },
+               }
+       }
+}
+impl HeuristicSolver {
+       fn greedy_local_search_verbose(instancewrapper: &InstanceWrapper, verbose_breakpointlist_positives_sorted: &[(usize, AgentAction, f32, f32)]) -> Vec<(usize, AgentAction, f32, f32)> {
+               let mut state = SimpleSolver::Uncontracted.obtain_state(&instancewrapper);
+               let mut intermediate_state : Vec<(usize, AgentAction, f32, f32)>;
+               
+               for breakpoint in verbose_breakpointlist_positives_sorted {
+                       let i = breakpoint.0;
+                       intermediate_state = vec![*breakpoint];
+                       intermediate_state.extend(state.iter().filter(|(j, _, _, _)| *j != i));
+                       if instancewrapper.get_utility_from_verbose_state_2(&state) < instancewrapper.get_utility_from_verbose_state_2(&intermediate_state) {
+                               state = intermediate_state;
+                       }
+               }
+		state
+       }
+       
+       fn knapsack_verbose(instancewrapper: &InstanceWrapper, verbose_breakpointlist_positives_sorted: &[(usize, AgentAction, f32, f32)], max_alpha_sum: f32) -> Vec<(usize, AgentAction, f32, f32)> {
+               let mut state = SimpleSolver::Uncontracted.obtain_state(&instancewrapper);
+               let mut alpha_sum = 0.0;
+               for breakpoint in verbose_breakpointlist_positives_sorted {
+                       let i = breakpoint.0;
+                       let alpha = breakpoint.2;
+                       let current_bp_position = state.iter().position(|(j,_,_,_)| *j == i).unwrap();
+                       if alpha_sum - state[current_bp_position].2 + alpha > max_alpha_sum {
+                               break;
+                       }
+                       state[current_bp_position] = *breakpoint;
+                       alpha_sum += alpha;
+               }
+		state
+       }
+       
+       fn at_most_one_contract_verbose(instancewrapper: &InstanceWrapper, verbose_breakpointlist_positives: &[(usize, AgentAction, f32, f32)]) -> Vec<(usize, AgentAction, f32, f32)> {
+               
+               let mut state = SimpleSolver::Uncontracted.obtain_state(&instancewrapper);
+               let rewardcontrib_uncontracted_nodes = instancewrapper.baseline_reward;
+               
+               let maximum_breakpoint = verbose_breakpointlist_positives.iter().max_by(
+                       |(_i1,_action1,alpha1,rewardcontrib1), (_i2,_action2,alpha2,rewardcontrib2)| ((1.0 - alpha1) * (rewardcontrib_uncontracted_nodes + rewardcontrib1)).total_cmp(&((1.0 - alpha2) * (rewardcontrib_uncontracted_nodes + rewardcontrib2)))
+               );
+               
+               if let Some((i,action,alpha,rewardcontrib)) = maximum_breakpoint {
+                       if (1.0 - alpha) * (rewardcontrib_uncontracted_nodes + rewardcontrib) > rewardcontrib_uncontracted_nodes {
+                               let switch_position = state.iter().position(|(j,_,_,_)| *j == *i).unwrap();
+                               state[switch_position] = (*i,*action,*alpha,*rewardcontrib);
+                       }
+               }
+
+		state
+       }
+
+       fn get_greedy_mckp_ordering_verbose(instancewrapper: &InstanceWrapper, max_alpha_sum: f32) -> Vec<(usize, AgentAction, f32, f32)> {
+               let mut verbose_breakpoints_with_marginal_slopes: Vec<(usize, AgentAction, f32, f32, f32)> = Vec::new();
+               for breakpointlist_i in &instancewrapper.verbose_breakpoints {
+                       let mut dominated: Vec<(usize, AgentAction, f32, f32)> = Vec::new();
+                       
+                       // find dominated breakpoints
+                       for bp_x in breakpointlist_i.iter().filter(|(_i,_action,alpha,_rewardcontrib)| *alpha > 0.) {
+                               for bp_y in breakpointlist_i.iter().filter(|bp| *bp != bp_x) {
+                                       let value_x = bp_x.3;
+                                       let value_y = bp_y.3;
+                                       let alpha_x = bp_x.2;
+                                       let alpha_y = bp_y.2;
+                                       
+                                       if value_x > 0. && value_x >= value_y && value_x / alpha_x >= value_y / alpha_y {
+                                               dominated.push(*bp_y);
+                                       } else {
+                                               for bp_z in breakpointlist_i.iter().filter(|bp| *bp != bp_x && *bp != bp_y)  {
+                                                       let value_z = bp_z.3;
+                                                       let alpha_z = bp_z.2;
+                                                       
+                                                       if      value_x <= value_y &&
+                                                               value_y <= value_z &&
+                                                               alpha_x <= alpha_y &&
+                                                               alpha_y <= alpha_z && (
+                                                                       value_x == value_y ||
+                                                                       alpha_y == alpha_z ||
+                                                                       (value_y - value_x) / (alpha_y - alpha_x) <= (value_z - value_y) / (alpha_z - alpha_y)
+                                                               ) {
+                                                               dominated.push(*bp_y);
+                                                       }
+                                               }
+                                       }
+                               }
+                       }
+                       // extract remaining breakpoints, i.e., only breakpoints that are 
+                       // 1. not dominated by another breakpoint, and
+                       // 2. have an alpha that is not higher than the maximum total alpha (since they cannot be taken anyway)
+			let mut remaining_breakpoints = breakpointlist_i.iter()
+				.filter(|bp| !dominated.contains(bp))
+				.filter(|(_i,_action,alpha,_rewardcontrib)| *alpha <= max_alpha_sum)
+				.cloned()
+				.collect::<Vec<_>>();
+                       
+                       // sort remaining breakpoints by increasing alpha
+                       remaining_breakpoints.sort_by(
+                               |(_i1,_action1,alpha1,_rewardcontrib1), (_i2,_action2,alpha2,_rewardcontrib2)| alpha1.total_cmp(alpha2)
+                       );
+                       
+                       // Add marginal slopes (reward difference / alpha difference with previous breakpoint) for all breakpoints except for the first one.
+                       let mut verbose_breakpointlist_i_with_marginal_slopes = remaining_breakpoints
+                               .iter()
+                               .enumerate()
+                               .filter(|(_j,(_i,_action,alpha,_rewardcontrib))| *alpha > 0.)
+                               .map(|(j,(i,action,alpha,rewardcontrib))| 
+                                       if j > 0 {
+                                               let prev_alpha = remaining_breakpoints[j-1].2;
+                                               let prev_rewardcontrib = remaining_breakpoints[j-1].3;
+                                               let marginal_slope = (rewardcontrib - prev_rewardcontrib) / (alpha - prev_alpha);
+                                               (*i, *action, *alpha, *rewardcontrib, marginal_slope) 
+                                       } else { 
+                                               (*i, *action, *alpha, *rewardcontrib, 0.0) 
+                                       })
+                               .collect::<Vec<(usize, AgentAction, f32, f32, f32)>>();
+
+                       // append to the indexed collection of all breakpoints
+                       verbose_breakpoints_with_marginal_slopes.append(&mut verbose_breakpointlist_i_with_marginal_slopes)
+               }
+               
+               // sort by decreasing marginal slopes
+               verbose_breakpoints_with_marginal_slopes.sort_by(
+                       |(_i1,_action1,_alpha1,_rewardcontrib1,marginalslope1), (_i2,_action2,_alpha2,_rewardcontrib2,marginalslope2)|
+                               marginalslope2.total_cmp(marginalslope1)
+               );
+
+               // return vector (with marginal slopes removed from breakpoint tuples)
+               verbose_breakpoints_with_marginal_slopes.iter().map(|(i,action,alpha,rewardcontrib,_marginalslope)| (*i,*action,*alpha,*rewardcontrib)).collect::<Vec<_>>()
+       }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::friedkin_johnson::FJNetwork;
+    use crate::instancegenerator;
+    
+    fn get_testing_instance() -> FJContractModel {
+               let indecisiveness_vec = vec![0.5, 0.25, 0.75];
+               let adjacency_vec = vec![false, true, false, true, false, true, false, true, false];
+        let network = FJNetwork::try_new_uniform(&indecisiveness_vec, &adjacency_vec).unwrap();
+        let value_probability_cost_triples = vec![(0., 0.98, 0.), (1./5., 0.01, 1./32.), (4./5., 0.01, 1./4.)];
+         
+               let action_set = ActionSet::try_new(&value_probability_cost_triples).unwrap();
+        let reward_weights = vec![1., 1., 1.];
+        let instance = FJContractModel::try_new(network, reward_weights, vec![action_set.clone(); 3]);
+        assert!(instance.is_ok());
+        
+        instance.unwrap()
+       }
+    
+    
+    fn get_testing_instance_2() -> FJContractModel {
+               let indecisiveness_vec = vec![0.5, 0.25, 0.75];
+               let adjacency_vec = vec![false, true, false, true, false, true, false, true, false];
+        let network = FJNetwork::try_new_uniform(&indecisiveness_vec, &adjacency_vec).unwrap();
+        let value_probability_cost_triples = vec![(0.1, 0.94, 0.1), (0.3, 0.044, 0.12), (0.8, 0.016, 0.6)];
+
+               let action_set = ActionSet::try_new(&value_probability_cost_triples).unwrap();
+        let reward_weights = vec![1., 1., 1.];
+        let instance = FJContractModel::try_new(network, reward_weights, vec![action_set.clone(); 3]);
+        assert!(instance.is_ok());
+        
+        instance.unwrap()
+       }
+       
+       #[test]
+       fn instancetest_2() {
+                // select all nodes as influencers
+        let instance = get_testing_instance_2();
+        let instancewrapper = InstanceWrapper::new(instance, &instancegenerator::SimpleAgentSelector::Random, 3);
+        assert_eq!(instancewrapper.influencer_indicator_vec, [true, true, true], "check influencer indicator vec when selecting everyone.");
+        
+        let breakpoints_expected = vec![
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 1.0)],
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 1./3.), (AgentAction::Deterministic(2), 0.48)],
+                       vec![(AgentAction::Random, 0.0)],
+               ];
+               
+               let breakpoints = instancewrapper.instance.get_breakpoints(&[true, true, true]);
+                               
+               for (result, expected) in std::iter::zip(breakpoints.iter(), breakpoints_expected.iter()) {
+                       assert_eq!(result.len(), expected.len());
+                       for (bp_res, bp_exp) in std::iter::zip(result.iter(), expected.iter()) {
+                               assert_eq!(bp_res.0, bp_exp.0);
+                               assert!((bp_res.1 - bp_exp.1).abs() < 1./(2.0_f32.powf(6.0)));
+                       }
+               }
+               
+               let expected_opt_actions = vec![(0, AgentAction::Random), (1, AgentAction::Deterministic(2)), (2,AgentAction::Random)];
+               
+               let opt_state_verbose = SimpleSolver::Optimal.obtain_state(&instancewrapper);
+               // ... check OPT actions
+               let mut opt_actions = opt_state_verbose.iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               opt_actions.sort_by_key(|(index,_action)| *index);
+               assert_eq!(opt_actions, expected_opt_actions, "check OPT actions");
+               
+               let opt_utility = instancewrapper.get_utility_from_verbose_state_2(&opt_state_verbose);
+               assert_eq!(opt_utility, 0.8944);
+       }
+    
+    #[test]
+    fn instancesolver_test() {
+
+        // select only one influencer; the second node should be the most influential.
+        let instance = get_testing_instance();
+        let instancewrapper = InstanceWrapper::new(instance, &instancegenerator::SimpleAgentSelector::MostInfluential, 1);
+        assert_eq!(instancewrapper.influencer_indicator_vec, [false, true, false], "check influencer_indicator_vec when selecting the single most influential agent as influencer");
+        
+        // select all nodes as influencers
+        let instance = get_testing_instance();
+        let instancewrapper = InstanceWrapper::new(instance, &instancegenerator::SimpleAgentSelector::Random, 3);
+        assert_eq!(instancewrapper.influencer_indicator_vec, [true, true, true], "check influencer_indicator_vec when selecting everyone");
+        
+        // keep this instance where all nodes are influencers, and ...
+        // ...check breakpoint counts
+        assert_eq!(instancewrapper.get_breakpoint_count_vec(), [3,3,2], "check breakpoint counts");
+        assert_eq!(instancewrapper.get_breakpoint_sum_for_influencers(), 8, "check breakpoint sums");
+        
+        // ... denote expected breakpoints
+               let breakpoints_expected = vec![
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 75./304.), (AgentAction::Deterministic(2), 35./64.)],
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 25./304.), (AgentAction::Deterministic(2), 35./192.)],
+                       vec![(AgentAction::Random, 0.0), (AgentAction::Deterministic(1), 75./152.)],
+               ];
+               
+               // ... check breakpoints
+               for breakpointlist in &instancewrapper.verbose_breakpoints {
+                       for (i,action,_alpha,_rewardcontribution) in breakpointlist {
+                               assert!(breakpoints_expected[*i].iter().map(|(exp_action, _exp_alpha)| exp_action).collect::<Vec<_>>().contains(&&action));
+                       }
+               }
+               
+               // ... check sorting without cropping expensive items (MCKP) [alphas need to be removed again]
+               let sorted_mckp_bpcollection_positives = HeuristicSolver::get_greedy_mckp_ordering_verbose(&instancewrapper, 1.0).iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               
+               let expected_mckp_sorting = vec![(1, AgentAction::Deterministic(2)), (0, AgentAction::Deterministic(2)), (2, AgentAction::Deterministic(1))];
+               assert_eq!(sorted_mckp_bpcollection_positives, expected_mckp_sorting, "check greedy MCKP sorting without cropping expensive items");
+               
+               // ... check sorting when cropping items with alpha > 0.5 (MCKP) [alphas need to be removed again]
+               let sorted_mckp_bpcollection_positives = HeuristicSolver::get_greedy_mckp_ordering_verbose(&instancewrapper, 0.5).iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               
+               let expected_mckp_sorting = vec![(1, AgentAction::Deterministic(2)), (2, AgentAction::Deterministic(1))];
+               assert_eq!(sorted_mckp_bpcollection_positives, expected_mckp_sorting, "check greedy MCKP sorting when cropping items with alpha > 0.5");
+               
+               // For this instance, all heuristics should equal OPT
+               let expected_actions = vec![(0, AgentAction::Random), (1, AgentAction::Deterministic(2)), (2,AgentAction::Random)];
+               
+               // ... check OPT actions
+               let mut opt_actions = SimpleSolver::Optimal.obtain_state(&instancewrapper).iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               opt_actions.sort_by_key(|(index,_action)| *index);
+               assert_eq!(opt_actions, expected_actions, "check OPT actions");
+               
+               // ... check GreedyMCKP actions (this is the full result of the algorithm, i.e., the best of knapsack packing and singletons)
+               let mut greedy_mckp_actions = HeuristicSolver::GreedyMCKP.obtain_state(&instancewrapper).iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               greedy_mckp_actions.sort_by_key(|(index,_action)| *index);
+               assert_eq!(greedy_mckp_actions, expected_actions, "check GreedyMCKP actions");
+               
+               
+               // ... check MCKP knapsack action (this is the pure output of the knapsack packing algorithm, without checking singletons)
+               let mut mckp_knapsack_actions = HeuristicSolver::knapsack_verbose(&instancewrapper, &HeuristicSolver::get_greedy_mckp_ordering_verbose(&instancewrapper, 0.5), 0.5).iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               mckp_knapsack_actions.sort_by_key(|(index,_action)| *index);
+               assert_eq!(mckp_knapsack_actions, expected_actions, "check MCKP knapsack output");
+               
+               // ... check best single contract action
+               let mut best_single_contract = HeuristicSolver::BestSingleAgentContract.obtain_state(&instancewrapper).iter().map(|(index, action, _alpha, _rewardcontrib)| (*index,*action)).collect::<Vec<_>>();
+               best_single_contract.sort_by_key(|(index,_action)| *index);
+               assert_eq!(best_single_contract, expected_actions, "check best single contract actions");
+    }
+}
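
The marginal-slope ordering that `get_greedy_mckp_ordering_verbose` is tested against above can be illustrated with a standalone sketch. This is a hypothetical, simplified version (the real code also tracks agent actions, dominance pruning, and a reward contribution per breakpoint): each agent contributes breakpoints as `(alpha, reward)` pairs sorted by increasing alpha, each non-initial breakpoint becomes an item whose marginal slope is the reward gain per unit of alpha since the previous breakpoint, and items are ordered by decreasing slope for greedy knapsack packing.

```rust
/// Simplified, hypothetical sketch of greedy MCKP ordering by marginal slope.
/// `per_agent[i]` holds agent i's breakpoints as (alpha, reward) pairs,
/// sorted by increasing alpha. Returns (agent, alpha, reward, slope) items
/// ordered by decreasing marginal slope.
fn greedy_mckp_order(per_agent: &[Vec<(f32, f32)>]) -> Vec<(usize, f32, f32, f32)> {
    let mut items = Vec::new();
    for (agent, bps) in per_agent.iter().enumerate() {
        // every breakpoint except the first yields an item with a marginal slope
        for j in 1..bps.len() {
            let (a_prev, r_prev) = bps[j - 1];
            let (a, r) = bps[j];
            let slope = (r - r_prev) / (a - a_prev);
            items.push((agent, a, r, slope));
        }
    }
    // sort by decreasing marginal slope (total_cmp gives a total order on f32)
    items.sort_by(|x, y| y.3.total_cmp(&x.3));
    items
}

fn main() {
    let per_agent = vec![
        vec![(0.0_f32, 0.0_f32), (0.25, 0.5)], // slope 2.0
        vec![(0.0, 0.0), (0.5, 0.25)],         // slope 0.5
    ];
    let order = greedy_mckp_order(&per_agent);
    assert_eq!(order[0].0, 0); // the steeper item comes first
    println!("{:?}", order);
}
```

A greedy packing would then take items in this order until the alpha budget is exhausted, which is what the MCKP sorting assertions above exercise.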
diff --git a/src/lib.rs b/src/lib.rs
new file mode 100644 (file)
index 0000000..a93198f
--- /dev/null
@@ -0,0 +1,19 @@
+/// Implementation of the Friedkin Johnson model for opinion dynamics.
+pub mod friedkin_johnson;
+/// Implementation of the FJ contracting model.
+pub mod fj_contract;
+
+/// Helper functions for loading and converting graphs.
+pub mod graphloader;
+/// Simple random graph generators, mainly for testing purposes.
+pub mod graphgenerator;
+/// Generators of random instances of the FJ contracting problem.
+pub mod instancegenerator;
+/// Solving instances of the FJ contracting problem using different heuristics.
+pub mod instancesolver;
+/// Wrapper for repeatedly generating and solving instances of the FJ contracting problem with statistical evaluation.
+pub mod instancesimulator;
+
+/// Statistical evaluation.
+pub mod statistics;
+
diff --git a/src/statistics.rs b/src/statistics.rs
new file mode 100644 (file)
index 0000000..7e28d8a
--- /dev/null
@@ -0,0 +1,199 @@
+/// Store statistical information as a boxplot.
+pub struct BoxPlot<T> {
+       min: T,
+       max: T,
+       median: T,
+       lower_quartile: T,
+       upper_quartile: T,
+       lower_whisker: T,
+       upper_whisker: T,
+       outliers: Vec<T>,
+}
+
+/// Store statistical information as a histogram.
+pub struct Histogram<T> {
+       min: T,
+       max: T,
+       total: T,
+       distribution: Vec<usize>,
+       outlier_count: usize,
+       threshold: usize,
+}
+
+impl<T: std::fmt::Display + std::fmt::Debug> std::fmt::Display for BoxPlot<T> {
+       /// Write a single line that contains the boxplot information (outliers omitted).
+       /// Useful for writing the results of a statistical experiment to a file.
+       fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "{} {} {} {} {} {} {}", self.min, self.max, self.median, self.lower_quartile, self.upper_quartile, self.lower_whisker, self.upper_whisker)
+    }
+}
+
+impl<T: std::fmt::Debug> std::fmt::Debug for BoxPlot<T> {
+       /// Obtain verbose information about a boxplot (for debugging purposes).
+       fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "min: {:?}, max: {:?}, median: {:?}, lower_quartile: {:?}, upper_quartile: {:?}, lower_whisker: {:?}, upper_whisker: {:?}, outliers: {:?}", self.min, self.max, self.median, self.lower_quartile, self.upper_quartile, self.lower_whisker, self.upper_whisker, self.outliers)
+    }
+}
+
+impl<T: std::fmt::Display + std::fmt::Debug> std::fmt::Display for Histogram<T> {
+       /// Write a single line that contains the minimum, the maximum, the total sum, and the outlier count.
+       fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "min: {:?}, max: {:?}, total: {:?}, outlier_count: {}", self.min, self.max, self.total, self.outlier_count)
+    }
+}
+
+impl<T: std::fmt::Debug> std::fmt::Debug for Histogram<T> {
+       /// Write a single line that contains the minimum, the maximum, the total sum, the outlier count, and the distribution.
+       fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "min: {:?}, max: {:?}, total: {:?}, outlier_count: {}, distribution: {:?}", self.min, self.max, self.total, self.outlier_count, self.distribution)
+    }
+}
+
+impl BoxPlot<f32> {
+       /// Create a [`BoxPlot`] from a history of (simulation) results.
+       ///
+       /// The history is given as a [`Vec`] of floating point numbers.
+       /// It computes the minimum, maximum, median, first (lower) and third (upper) quartiles, and the whiskers.
+       /// The upper whisker is the largest value within 1.5 IQR above the third quartile,
+       /// the lower whisker is the smallest value within 1.5 IQR below the first quartile,
+       /// and all values beyond the whiskers are collected as outliers.
+       ///
+       /// Panics if the vector is empty.
+       ///
+       /// # Example
+       /// ``` 
+       /// use fj_contracting::statistics::BoxPlot;
+       ///
+       /// // Specify a history.
+       /// let history = vec![14., 12., 9., 15., 7., 4., 28., 3., 28.];
+       ///
+       /// // Obtain the boxplot from this history.
+       /// let bplot = BoxPlot::from_nonempty_f32_history_vec(history);
+       ///
+       /// let bplot_output = format!("{bplot:?}");
+       /// let expected_output = "min: 3.0, max: 28.0, median: 12.0, lower_quartile: 7.0, upper_quartile: 15.0, lower_whisker: 3.0, upper_whisker: 15.0, outliers: [28.0, 28.0]";
+       /// assert_eq!(bplot_output, expected_output);
+       /// assert_eq!(*bplot.outliers(), vec![28., 28.]);
+       /// ```
+       pub fn from_nonempty_f32_history_vec(mut vec: Vec<f32>) -> BoxPlot<f32> {
+               vec.sort_by(|x, y| x.total_cmp(y));
+               let min = vec[0];
+               let max = vec[vec.len()-1];
+               let (median, lower_quartile, upper_quartile) = match vec.len() % 2 {
+                       0 => {
+                               (0.5 * (vec[vec.len() / 2 - 1] + vec[vec.len() / 2]),
+                               0.5 * (vec[vec.len() / 4 - 1] + vec[vec.len() / 4]),
+                               0.5 * (vec[vec.len() * 3 / 4 - 1] + vec[vec.len() * 3 / 4]))
+                       },
+                       1 => {
+                               (vec[vec.len() / 2],
+                               vec[vec.len() / 4],
+                               vec[vec.len() * 3 / 4])
+                       },
+                       _ => panic!("{} modulo 2 should not be greater than or equal to 2.", vec.len())
+               };
+               let fifty_percent_range = upper_quartile - lower_quartile;
+               let lower_whisker = vec
+                       .iter()
+                       .filter(|x| lower_quartile - *x <= 1.5 * fifty_percent_range)
+                       .fold(lower_quartile as f32, |acc, e| acc.min(*e));
+               let upper_whisker = vec
+                       .iter()
+                       .filter(|x| *x - upper_quartile <= 1.5 * fifty_percent_range)
+                       .fold(upper_quartile as f32, |acc, e| acc.max(*e));
+               let outliers = vec
+                       .into_iter()
+                       .filter(|x| lower_quartile - *x > 1.5 * fifty_percent_range || *x - upper_quartile > 1.5 * fifty_percent_range)
+                       .collect::<Vec<f32>>();
+               
+               BoxPlot { min, max, median, lower_quartile, upper_quartile, lower_whisker, upper_whisker, outliers }
+       }
+       
+       /// Return the outliers.
+       pub fn outliers(&self) -> &Vec<f32> {
+               &self.outliers
+       }
+}
+
+impl Histogram<usize> {
+       /// Create a [`Histogram`] from a history of (simulation) results and a threshold.
+       ///
+       /// - The history of results is a [`Vec`] of integers of type [`usize`].
+       /// - The threshold is the largest value counted individually; values strictly greater than the threshold are counted as outliers.
+       ///
+       /// For every integer in `0..=threshold`, the number of occurrences in the history is counted.
+       ///
+       /// Panics if the vector is empty.
+       ///
+       /// # Example
+       /// ```
+       /// use fj_contracting::statistics::Histogram;
+       ///
+       /// // Specify a history.
+       /// let history = vec![2, 2, 8, 1, 3, 3, 3, 6, 2, 5, 5];
+       ///
+       /// // Create a histogram from this history with threshold 5.
+       /// let histogram = Histogram::from_nonempty_usize_history_vec(history, 5);
+       ///
+       /// let histogram_output = format!("{histogram}");
+       /// // The minimum is 1, the maximum is 8, the total value is 40, and
+       /// // there are two values that are strictly bigger than the threshold.
+       /// assert_eq!(histogram_output, "min: 1, max: 8, total: 40, outlier_count: 2");
+       ///
+       /// // In the history, 0 did not occur, 1 occurred once, 2 occurred three times, ...
+       /// assert_eq!(*histogram.distribution(), vec![0,1,3,3,0,2]);
+       /// ```
+       pub fn from_nonempty_usize_history_vec(mut vec: Vec<usize>, threshold: usize) -> Histogram<usize> {
+               vec.sort();
+               let min = vec[0];
+               let max = vec[vec.len()-1];
+               let total = vec.iter().sum();
+               
+               let mut distribution: Vec<usize> = vec![0;threshold+1];
+               let mut outlier_count = 0;
+               for entry in vec {
+                       if entry <= threshold {
+                               distribution[entry] += 1;
+                       } else {
+                               outlier_count += 1;
+                       }
+               }
+               
+               Histogram { min, max, total, distribution, outlier_count, threshold }
+       }
+       
+       /// Return the threshold.
+       pub fn threshold(&self) -> usize {
+               self.threshold
+       }
+       
+       /// Return the histogram (distribution).
+       pub fn distribution(&self) -> &Vec<usize> {
+               &self.distribution
+       }
+}
+
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    
+    #[test]
+    fn boxplot_even_history_length() {
+               let history = vec![14., 12., 9., 15., 7., 4., 28., 3.];
+               let bplot = BoxPlot::from_nonempty_f32_history_vec(history);
+               assert_eq!(bplot.min, 3.);
+               assert_eq!(bplot.max, 28.);
+               assert_eq!(bplot.median, 21./2.);
+               assert_eq!(bplot.lower_quartile, 11./2.);
+               assert_eq!(bplot.upper_quartile, 29./2.);
+               assert_eq!(bplot.lower_whisker, 3.);
+               assert_eq!(bplot.upper_whisker, 28.);
+               assert_eq!(bplot.outliers, vec![]);
+    }
+    
+    #[test]
+    fn boxplot_tiny() {
+               let history = vec![14.,];
+               let bplot = BoxPlot::from_nonempty_f32_history_vec(history);
+               assert_eq!(bplot.median, bplot.lower_whisker);
+       }
+}
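
The even-length convention used by `from_nonempty_f32_history_vec` (median and quartiles of an even-length sorted history are the midpoint of the two adjacent elements) can be checked in isolation. A minimal sketch with a hypothetical helper, not part of the crate:

```rust
/// Hypothetical helper: median of a sorted slice, using the midpoint of the
/// two middle elements when the length is even (the same convention as the
/// boxplot constructor above).
fn midpoint_median(sorted: &[f32]) -> f32 {
    let n = sorted.len();
    if n % 2 == 0 {
        0.5 * (sorted[n / 2 - 1] + sorted[n / 2])
    } else {
        sorted[n / 2]
    }
}

fn main() {
    // sorted version of the even-length test history above
    let v = [3., 4., 7., 9., 12., 14., 15., 28.];
    assert_eq!(midpoint_median(&v), 10.5); // (9 + 12) / 2, matching the test's 21/2
    println!("median = {}", midpoint_median(&v));
}
```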