Minor code style and typo fixes #48

Open
wants to merge 7 commits into main
4 changes: 2 additions & 2 deletions Cargo.toml
@@ -29,9 +29,9 @@ dmsort = "1.0.1"

[features]
default=["vec-id-sets"]
# Enable the parallel runtime implementation todo make default
# Enable the parallel runtime implementation # TODO make default
Member
This looks a little weird. Also, I'm not sure I understand what's left to do.

Collaborator
The idea was to make the parallel-runtime feature a default feature of the crate. Currently, if you want parallel execution, you need to enable it explicitly with a lot of ceremony: https://github.com/lf-lang/lingua-franca/blob/4f8d8893574e3c76fd30c2515c9be9a7444e4f1f/test/Rust/src/target/CargoDependencyOnRuntime.lf#L5
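For reference, the "ceremony" today amounts to spelling the feature out on the dependency, roughly like the following in a downstream Cargo.toml. This is only a sketch: the crate name `reactor_rt` and the git source are assumptions, and an LF program would declare the equivalent through its target properties rather than a hand-written Cargo.toml.

```toml
# Hypothetical downstream Cargo.toml; today the feature has to be requested explicitly.
[dependencies]
reactor_rt = { git = "https://github.com/lf-lang/reactor-rs", features = ["parallel-runtime"] }
```

If `parallel-runtime` were part of `default`, the plain dependency would suffice, and users who want the sequential scheduler could still opt out with `default-features = false`.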

parallel-runtime=["rayon"]
# Enables 64-bit wide reaction ids on 64 bit architectures.
# Enables 64-bit wide reaction ids on 64-bit architectures.
# This may reduce performance, but allows for 2^32 reactor
# instances compared to the default of 2^16, which may feel
# a bit tight for some applications.
4 changes: 1 addition & 3 deletions benches/savina_pong.rs
@@ -39,9 +39,7 @@ The ping/pong game from Savina benchmarks. This can be compared
to the C implementation (see results.md).

See original at https://github.com/icyphy/lingua-franca/blob/f5868bec199e02f784393f32b594be5df935e2ee/benchmark/C/Savina/PingPong.lf


*/
*/

criterion_group!(benches, reactor_main);
criterion_main!(benches);
6 changes: 2 additions & 4 deletions src/actions.rs
@@ -51,8 +51,8 @@ pub(crate) struct Action<Kind, T: Sync> {
_logical: PhantomData<Kind>,

/// Stores values of an action for future scheduled events.
/// We rely strongly on the fact that any value put in there by [Action.schedule_future_value]
/// will be cleaned up after that tag. Otherwise the map will
/// We rely strongly on the fact that any value put therein by [Action.schedule_future_value]
/// will be cleaned up after that tag. Otherwise, the map will
/// blow up the heap.
map: VecMap<Reverse<EventTag>, Option<T>>,
}
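To make the invariant above concrete: whatever is scheduled for a tag has to be dropped again once that tag has been processed, or the store grows without bound. A toy, self-contained stand-in (a plain `BTreeMap` instead of the crate's `VecMap`, and a bare integer instead of `EventTag`):

```rust
use std::collections::BTreeMap;

/// Toy stand-in for an action's value store, keyed by a (simplified) tag.
struct ValueStore<T> {
    map: BTreeMap<u64, Option<T>>,
}

impl<T> ValueStore<T> {
    fn new() -> Self { Self { map: BTreeMap::new() } }

    /// Analogue of `schedule_future_value`: nothing checks that `tag` is in the future.
    fn schedule(&mut self, tag: u64, value: Option<T>) {
        self.map.insert(tag, value);
    }

    /// The cleanup that must run after each tag; skipping it means the map only ever grows.
    fn forget(&mut self, tag: u64) {
        self.map.remove(&tag);
    }
}

fn main() {
    let mut store = ValueStore::new();
    store.schedule(42, Some("payload"));
    // ... the event for tag 42 is processed here ...
    store.forget(42); // omitting this is exactly the leak the comment warns about
}
```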
@@ -61,8 +61,6 @@ impl<K, T: Sync> Action<K, T> {
/// Record a future value that can be queried at a future logical time.
/// Note that we don't check that the given time is in the future. If it's
/// in the past, the value will never be reclaimed.
///
///
#[inline]
pub(crate) fn schedule_future_value(&mut self, time: EventTag, value: Option<T>) {
match self.map.entry(Reverse(time)) {
6 changes: 1 addition & 5 deletions src/lib.rs
@@ -65,14 +65,10 @@ extern crate array_macro;
#[macro_use]
extern crate assert_matches;
#[macro_use]
extern crate index_vec;
#[macro_use]
extern crate log;
#[cfg(feature = "parallel-runtime")]
extern crate rayon;
#[macro_use]
extern crate smallvec;
#[macro_use]
extern crate static_assertions;
#[macro_use]
extern crate cfg_if;
@@ -114,7 +110,7 @@ pub mod prelude {
};

/// Alias for the unit type, so that it can be written without quotes in LF.
/// Otherwise it needs to be written `{= () =}`.
/// Otherwise, it needs to be written `{= () =}`.
/// It is not camel-case as it is actually a primitive type.
#[allow(non_camel_case_types)]
pub type unit = ();
2 changes: 1 addition & 1 deletion src/ports.rs
@@ -295,7 +295,7 @@ struct PortCell<T: Sync> {
/// field of that equiv class is updated to contain B.
///
/// Why?
/// When you have bound eg A -> B and *then* bind U -> A,
/// When you have bound e.g. A -> B and *then* bind U -> A,
/// then both the equiv class of A and B (the downstream of A)
/// need to be updated to point to the equiv class of U
///
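A rough, self-contained sketch of the bookkeeping described above; it is not the actual `PortCell` representation, it just models an equivalence class as a shared, growable list of port names:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

type EquivClass = Rc<RefCell<Vec<&'static str>>>;

/// Binds `up -> down` by folding `down`'s class (and everything already in it)
/// into `up`'s class, so all transitively bound ports share one class.
fn bind(classes: &mut HashMap<&'static str, EquivClass>, up: &'static str, down: &'static str) {
    let up_class = classes.get(up).unwrap().clone();
    let down_class = classes.get(down).unwrap().clone();
    if Rc::ptr_eq(&up_class, &down_class) {
        return; // already in the same class
    }
    let members: Vec<&'static str> = down_class.borrow().clone();
    up_class.borrow_mut().extend(members.iter().copied());
    // Repoint every member of the old class: this is the "both A and B
    // must be updated" step from the comment above.
    for m in members {
        classes.insert(m, up_class.clone());
    }
}

fn main() {
    let mut classes: HashMap<&'static str, EquivClass> = HashMap::new();
    for p in ["A", "B", "U"] {
        classes.insert(p, Rc::new(RefCell::new(vec![p])));
    }
    bind(&mut classes, "A", "B"); // A -> B: class {A, B}
    bind(&mut classes, "U", "A"); // U -> A: A's *and* B's entries now point to {U, A, B}
    assert!(Rc::ptr_eq(classes.get("B").unwrap(), classes.get("U").unwrap()));
}
```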
24 changes: 12 additions & 12 deletions src/scheduler/assembly_impl.rs
@@ -119,7 +119,7 @@
}

/// Final result of the assembly of a reactor.
pub struct FinishedReactor<'x, S>(AssemblyCtx<'x, S>, S)
pub struct FinishedReactor<S>(S)
where
S: ReactorInitializer;

@@ -147,9 +147,9 @@
pub fn assemble(
self,
build_reactor_tree: impl FnOnce(Self) -> AssemblyResult<AssemblyIntermediate<'x, S>>,
) -> AssemblyResult<FinishedReactor<'x, S>> {
let AssemblyIntermediate(ich, reactor) = build_reactor_tree(self)?;
Ok(FinishedReactor(ich, reactor))
) -> AssemblyResult<FinishedReactor<S>> {
let AssemblyIntermediate(_, reactor) = build_reactor_tree(self)?;
Ok(FinishedReactor(reactor))
}

/// Innermost function.
@@ -160,7 +160,7 @@
reaction_names: [Option<&'static str>; N],
declare_dependencies: impl FnOnce(&mut DependencyDeclarator<S>, &mut S, [GlobalReactionId; N]) -> AssemblyResult<()>,
) -> AssemblyResult<AssemblyIntermediate<'x, S>> {
// todo when feature(generic_const_exprs) is stabilized,
// TODO when feature(generic_const_exprs) is stabilized,
// replace const parameter N with S::MAX_REACTION_ID.index().
debug_assert_eq!(N, S::MAX_REACTION_ID.index(), "Should initialize all reactions");

@@ -199,7 +199,7 @@
/// priority edges, as they are taken to be those declared
/// in LF by the user.
/// The rest do not have priority edges, and their
/// implementation must hence have no observable side-effect.
/// implementation must hence have no observable side effect.
fn new_reactions<const N: usize>(
&mut self,
my_id: ReactorId,
@@ -303,13 +303,12 @@
}
}

impl<S> FinishedReactor<'_, S>
impl<S> FinishedReactor<S>
where
S: ReactorInitializer,
{
fn finish(self) -> S {
let FinishedReactor(_, reactor) = self;
reactor
self.0
}
}

@@ -362,8 +361,9 @@ impl<S: ReactorInitializer> DependencyDeclarator<'_, '_, S> {

/// Bind the ports of the upstream to those of the downstream,
/// as if zipping both iterators.
/// todo this will just throw away bindings if both iterators are not of the same size
/// normally this should be reported by LFC as a warning, maybe we should implement the same thing here
/// TODO this will just throw away bindings if both iterators are not of the same size
/// normally this should be reported by LFC as a warning, maybe we should implement the same
/// thing here
#[inline]
pub fn bind_ports_zip<'a, T: Sync + 'a>(
&mut self,
@@ -489,7 +489,7 @@ impl<S: ReactorInitializer> ComponentCreator<'_, '_, S> {
/// we trust the code generator to fail if a port is both on
/// the LHS and RHS of a connection.
///
/// This is necessary to be iterate the same bank over distinct
/// This is necessary to be able to iterate the same bank over distinct
/// ports or multiports to bind them together.
#[macro_export]
#[doc(hidden)]
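On the `bind_ports_zip` TODO above: `Iterator::zip` stops at the shorter side, which is why a length mismatch silently drops bindings instead of erroring. A trivial stand-alone illustration:

```rust
fn main() {
    let upstream = ["u0", "u1", "u2"];
    let downstream = ["d0", "d1"]; // one port short
    // zip stops after two pairs; "u2" is never bound, and nothing reports it.
    let bound: Vec<_> = upstream.iter().zip(downstream.iter()).collect();
    assert_eq!(bound.len(), 2);
    println!("{bound:?}"); // [("u0", "d0"), ("u1", "d1")]
}
```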
22 changes: 16 additions & 6 deletions src/scheduler/context.rs
@@ -271,14 +271,24 @@ impl<'a, 'x> ReactionCtx<'a, 'x> {
self.debug_info.display_reaction(self.current_reaction.unwrap()),
);
}
// todo
PortKind::ChildInputReference => {}
PortKind::ChildOutputReference => {}
PortKind::ChildInputReference => {
// Here there is actually no way we can check this at runtime currently.
// The problem is that the port ID's container is this reactor and not the
// child reactor. This doesn't matter much in practice as programs generated
// with LFC will never fail these checks anyway.
}
PortKind::ChildOutputReference => {
panic!(
"Cannot set an output port of a child (port {} set by reaction {})!",
self.debug_info.id_registry.fmt_component(port_id),
self.debug_info.display_reaction(self.current_reaction.unwrap()),
)
}
}
}

/// Sets the value of the given port, if the given value is `Some`.
/// Otherwise the port is not set and no reactions are triggered.
/// Otherwise, the port is not set and no reactions are triggered.
///
/// The change is visible at the same logical time, i.e.
/// the value propagates immediately. This may hence
@@ -724,7 +734,7 @@ pub enum Offset {
/// Specify that the trigger will fire at least after
/// the provided duration.
///
/// If the duration is zero (eg [Asap](Self::Asap)), it does not
/// If the duration is zero (e.g. [Asap](Self::Asap)), it does not
/// mean that the trigger will fire right away. For actions, the
/// action's inherent minimum delay must be taken into account,
/// and even with a zero minimal delay, a delay of one microstep
@@ -786,7 +796,7 @@ pub struct CleanupCtx {

impl CleanupCtx {
pub fn cleanup_multiport<T: Sync>(&self, port: &mut Multiport<T>) {
// todo bound ports don't need to be cleared
// TODO bound ports don't need to be cleared
for channel in port {
channel.clear_value()
}
6 changes: 3 additions & 3 deletions src/scheduler/debug.rs
@@ -57,7 +57,7 @@ pub(crate) struct DebugInfoRegistry {

main_reactor: Option<ReactorId>,

// todo better data structure, eg IndexVec<ReactorId, IndexVec<LocalReactionId, _>>
// TODO better data structure, e.g. IndexVec<ReactorId, IndexVec<LocalReactionId, _>>
/// Labels of each reaction, only reactions that have one are in here.
reaction_labels: HashMap<GlobalReactionId, Cow<'static, str>>,
}
@@ -173,7 +173,7 @@ impl DebugInfoRegistry {
// ie, we're the first component of the next reactor.
Ok(rid) => (rid.plus(1), 0usize),
// Here, rid is the reactor which contains the trigger.
// Eg if you have reactor_bound=[2, 4],
// E.g. if you have reactor_bound=[2, 4],
// that corresponds to two reactors [2..2, 2..4].
// If you ask for 2, it will take the branch Ok above.
// If you ask for 3, it will fail with Err(0), and reactor_bound[0]==2
@@ -233,7 +233,7 @@ pub(crate) struct ReactorDebugInfo {
/// Simple name of the instantiation (last segment of the path)
#[allow(unused)]
pub inst_name: &'static str,
/// Path to this instantiation, with trailing slash (eg `"/parent/child/"`)
/// Path to this instantiation, with trailing slash (e.g. `"/parent/child/"`)
inst_path: String,
}

23 changes: 11 additions & 12 deletions src/scheduler/dependencies.rs
@@ -116,7 +116,7 @@ pub(super) struct DepGraph {
ix_by_id: HashMap<GraphId, GraphIx>,

/// Map of multiport component ID -> multiport ID.
/// todo data structure is bad.
/// TODO data structure is bad.
multiport_containment: HashMap<GraphId, TriggerId>,
/// Map of multiport ID -> range of IDs for its channels
multiport_ranges: VecMap<TriggerId, Range<TriggerId>>,
@@ -342,7 +342,6 @@ enum EdgeWeight {
/// if they're labeled `Default`, they're trigger dependencies,
/// otherwise use dependencies.
Default,
///
Use,
}

@@ -375,7 +374,7 @@ impl ReactionLevelInfo {
pub(super) struct DataflowInfo {
/// Maps each trigger to the set of reactions that need
/// to be scheduled when it is triggered.
/// Todo: many of those are never asked for, eg those of bound ports
/// TODO: many of those are never asked for, e.g. those of bound ports
trigger_to_plan: IndexVec<TriggerId, Arc<ExecutableReactions<'static>>>,
}

@@ -398,7 +397,7 @@ impl DataflowInfo {
// if let Some(_multiport_id) = multiport_containment.get(&dataflow[trigger].id) {
// assert_eq!(dataflow[trigger].kind, NodeKind::Port);
// todo!("multiports")
// todo this is a multiport channel:
// TODO this is a multiport channel:
// 1. if someone has declared a dependency on this individual channel, collect dependencies into DEPS
// 2. else add trigger to DELAY goto 4
// 3. merge DEPS into dependencies ALL for the whole multiport
@@ -569,7 +568,7 @@ impl From<u32> for LevelIx {
pub struct ExecutableReactions<'x> {
/// An ordered list of levels to execute.
///
/// It must by construction be the case that a reaction
/// It must, by construction, be the case that a reaction
/// in level `i` has no dependency(1) on reactions in levels `j >= i`.
/// This way, the execution of reactions in the same level
/// may be parallelized.
@@ -593,7 +592,7 @@ impl<'x> ExecutableReactions<'x> {

/// Returns an iterator which associates batches of reactions
/// with their level. Note that this does not mutate this collection
/// (eg drain it), because that way we can use borrowed Cows
/// (e.g. drain it), because that way we can use borrowed Cows
/// and avoid more allocation.
pub fn batches(&self) -> impl Iterator<Item = &(LevelIx, Cow<'x, Level>)> + '_ {
self.levels.iter()
Expand Down Expand Up @@ -645,7 +644,7 @@ impl<'x> ExecutableReactions<'x> {
if e.get_mut().is_empty() {
e.replace(src_level.clone());
} else {
// todo maybe set is not modified by the union
// TODO maybe set is not modified by the union
e.get_mut().to_mut().extend(src_level.iter());
}
}
@@ -673,7 +672,7 @@ impl<'x> ExecutableReactions<'x> {
Self::merge_plans_after(x, y, LevelIx::ZERO)
}

// todo would be nice to simplify this, it's hot
// TODO It would be nice to simplify this, it's hot
/// Produce the set union of two reaction plans.
/// Levels below the `min_level` are not merged, and the caller
/// shouldn't query them. For all levels >= `min_level`,
@@ -890,9 +889,9 @@ pub mod test {
test.graph.triggers_reaction(p0, n2);
test.graph.triggers_reaction(p1, n2);

// connect to prev_in
// connect to `prev_in`
test.graph.triggers_reaction(prev_in, n1);
// replace prev_in with out
// replace `prev_in` with `out`
test.graph.reaction_effects(n2, out);
prev_in = out;
}
@@ -930,9 +929,9 @@ pub mod test {
test.graph.triggers_reaction(p01, n2);
test.graph.triggers_reaction(p11, n2);

// connect to prev_in
// connect to `prev_in`
test.graph.triggers_reaction(prev_in, n1);
// replace prev_in with out
// replace `prev_in` with `out`
test.graph.reaction_effects(n2, out);
prev_in = out;
}
4 changes: 2 additions & 2 deletions src/scheduler/events.rs
@@ -212,12 +212,12 @@ impl<'x> EventQueue<'x> {
self.value_list.pop_front()
}

// todo perf: we could make a more optimal function to push a
// TODO perf: we could make a more optimal function to push a
// lot of events at once. Consider the following algorithm:
// - start with a sorted `self.value_list` and a (non-sorted) `new_evts: Vec<Event>`
// - sort the new events in place (in a Cow maybe). They'll
// probably come in already sorted but we can't assume this.
// Use an algorithm that best-cases for sorted data. (eg https://crates.io/crates/dmsort)
// Use an algorithm that best-cases for sorted data. (e.g. https://crates.io/crates/dmsort)
// - take the earliest new event and binary search to insert it.
// - then do the same thing but only on the remaining (to the right)
// portion of `self.value_list`. Basically the routine of an insertion
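A self-contained sketch of the batched-insertion routine the TODO above describes, with `u64` tags standing in for events and a plain sorted `Vec` standing in for the queue (the real queue is a `VecDeque` of events and would use an adaptive sort such as dmsort):

```rust
/// Merges `new_events` into the already-sorted `queue`, narrowing the search
/// window after each insertion as described in the TODO above.
fn push_batch(queue: &mut Vec<u64>, mut new_events: Vec<u64>) {
    // Events often arrive nearly sorted; an adaptive sort (e.g. dmsort) would shine here.
    new_events.sort_unstable();
    let mut from = 0; // left bound of the remaining search window
    for evt in new_events {
        let idx = match queue[from..].binary_search(&evt) {
            Ok(i) | Err(i) => from + i,
        };
        queue.insert(idx, evt);
        // Everything already inserted is <= evt, so later (greater or equal)
        // events can only land at or after this position.
        from = idx + 1;
    }
}

fn main() {
    let mut queue = vec![1, 4, 8, 15];
    push_batch(&mut queue, vec![16, 3, 9]);
    assert_eq!(queue, vec![1, 3, 4, 8, 9, 15, 16]);
}
```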
14 changes: 7 additions & 7 deletions src/scheduler/scheduler_impl.rs
@@ -83,9 +83,9 @@ macro_rules! push_event {

/// The runtime scheduler.
///
/// Lifetime parameters: 'x and 't are carried around everywhere,
/// 'x allows us to take references into the dataflow graph, and
/// 't to spawn new scoped threads for physical actions. 'a is more
/// Lifetime parameters: `'x` and `'t` are carried around everywhere,
/// `'x` allows us to take references into the dataflow graph, and
/// `'t` to spawn new scoped threads for physical actions. `'a` is more
/// useless but is needed to compile.
pub struct SyncScheduler<'x> {
/// The latest processed logical time (necessarily behind physical time).
@@ -351,8 +351,8 @@ impl<'x> SyncScheduler<'x> {
}
Err(RecvTimeoutError::Timeout) => { /*great*/ }
Err(RecvTimeoutError::Disconnected) => {
// ok, there are no physical actions in the program so it's useless to block on self.rx
// we still need to wait though..
// ok, there are no physical actions in the program, so it's useless to block on self.rx
// we still need to wait though...
if let Some(remaining) = target.checked_duration_since(Instant::now()) {
std::thread::sleep(remaining);
}
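The `Disconnected` branch above is the interesting one: with no physical-action senders alive, the channel can never wake the scheduler early, so it simply sleeps out the remaining time. A stripped-down, stand-alone sketch of that wait pattern using `std::sync::mpsc`; it illustrates the control flow only and is not the scheduler's actual code:

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Instant;

/// Waits until `target`, or until `rx` delivers an event, whichever comes first.
/// Returns the event if one arrived early.
fn wait_until<T>(rx: &Receiver<T>, target: Instant) -> Option<T> {
    loop {
        let Some(remaining) = target.checked_duration_since(Instant::now()) else {
            return None; // target reached
        };
        match rx.recv_timeout(remaining) {
            Ok(event) => return Some(event), // woken early, e.g. by a physical action
            Err(RecvTimeoutError::Timeout) => { /* great: we reached the target */ }
            Err(RecvTimeoutError::Disconnected) => {
                // No senders left: nothing can wake us, so just sleep out the rest.
                std::thread::sleep(remaining);
                return None;
            }
        }
    }
}

fn main() {
    let (_tx, rx) = std::sync::mpsc::channel::<()>();
    // _tx is kept alive: after 50 ms of silence we hit the Timeout branch and return None.
    let got = wait_until(&rx, Instant::now() + std::time::Duration::from_millis(50));
    assert!(got.is_none());
}
```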
@@ -446,7 +446,7 @@ impl<'x> SyncScheduler<'x> {
push_event!(self, evt)
}

// cleanup tag-specific resources, eg clear port values
// cleanup tag-specific resources, e.g. clear port values
let ctx = CleanupCtx { tag };
// TODO measure performance of cleaning up all reactors w/ virtual dispatch like this.
// see also efforts in the C runtime to avoid this
@@ -497,7 +497,7 @@ mod parallel_rt_impl {
unsafe impl<T> Sync for UnsafeSharedPointer<T> {}

/// We need a Clone bound to use fold_with, but this clone
/// implementation is not general purpose so I hide it.
/// implementation is not general purpose, so I hide it.
struct CloneableCtx<'a, 'x>(ReactionCtx<'a, 'x>);

impl Clone for CloneableCtx<'_, '_> {
2 changes: 1 addition & 1 deletion src/triggers.rs
@@ -84,7 +84,7 @@ pub struct TriggerId(TriggerIdImpl);
// info. But it also forces us to use inefficient data structures to get a Map<TriggerId, ...>,
// because ids were not allocated consecutively. We were previously using
// hashmaps, now we use IndexVec.
// Also the structure of GlobalId used to set relatively low
// Also, the structure of GlobalId used to set relatively low
// ceilings on the number of components and reactions of each
// reactor. Previously, we could have max 2^16 (reactions+components)
// per reactor. Now we can have 2^16 reactions per reactor,