
Evaluate segmentation in infer neurons task #8221

Open · wants to merge 5 commits into base: master

Conversation

cdfhalle

@cdfhalle cdfhalle commented Nov 20, 2024

  • Adds the functionality to evaluate a neuron segmentation using the AI modal.
  • In the neuron segmentation tab of the modal, the user (a scalable minds employee) can now additionally open the evaluation settings form and enter parameters for the evaluate-split-merger task in Voxelytics. Whenever the evaluation settings are used, the evaluation runs subsequently to the inference, and the error annotation is uploaded to the resulting dataset.

Steps to test:

TODOs:

  • ...

Issues:


(Please delete unneeded items, merge only when none are left open)

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced job submission process for neuron inference with additional evaluation parameters.
    • Introduced a new component for configuring evaluation settings in AI jobs.
    • Users can now initiate a split-merger evaluation during neuron inference.
  • Improvements

    • Updated API to support new evaluation-related options during job initiation.
    • Improved sorting logic for job listings by creation date.
    • Asynchronous reading of image files from the datastore filesystem for better performance.
    • Enhanced error messages for job initiation on datasets from other organizations.
    • Removed bounding box size restriction for inferral jobs for super users.
  • Bug Fixes

    • Fixed performance bottleneck when deleting multiple trees simultaneously.
    • Resolved issue with importing NML files that contain groups but no trees.
    • Fixed critical bug where deleting a non-existing node via the API would delete the entire active tree.
    • Removed Google Analytics integration.


coderabbitai bot commented Nov 20, 2024

Walkthrough

The pull request introduces enhancements to the job submission process within the application. Key modifications include updates to the JobController methods to accept additional parameters for neuron inference and TIFF export jobs. The routing configuration for the inferNeurons endpoint has been adjusted to accommodate these new parameters. Furthermore, the frontend JavaScript files have been updated to reflect these changes, allowing for more detailed job configurations. A new component for evaluation settings has been added to improve user interaction when starting AI jobs.

Changes

File Path Change Summary
app/controllers/JobController.scala Updated runInferNeuronsJob and runExportTiffJob methods to include new parameters for enhanced job configuration.
conf/webknossos.latest.routes Modified the POST /jobs/run/inferNeurons route to accept additional parameters for neuron inference jobs.
frontend/javascripts/admin/api/jobs.ts Updated startNeuronInferralJob function to include new parameters for evaluation settings; improved job sorting logic in getJobs function.
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx Added CollapsibleEvaluationSettings component for configuring evaluation settings; updated StartJobForm and NeuronSegmentationForm to incorporate new evaluation parameters and improve error handling during job initiation.

Possibly related PRs

Suggested labels

new feature, enhancement, frontend, backend

Suggested reviewers

  • MichaelBuessemeyer
  • dieknolle3333

Poem

In the meadow where jobs take flight,
New settings bloom, oh what a sight!
With parameters fresh, our jobs now gleam,
Evaluation's sweet, like a rabbit's dream.
Hop along, let the changes flow,
In the world of jobs, we’ll steal the show! 🐇✨



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (2)
frontend/javascripts/admin/api/jobs.ts (1)

196-201: Consider improving parameter types and adding documentation.

While the new parameters align well with the evaluation functionality:

  1. Consider using numeric types instead of strings for measurement parameters:
    • evalMaxEdgeLength
    • evalSparseTubeThresholdNm
    • evalMinMergerPathLengthNm
  2. Consider adding JSDoc comments to document the purpose and expected values of each parameter.
+/**
+ * Starts a neuron inference job with optional evaluation.
+ * @param organizationId - Organization identifier
+ * @param datasetName - Name of the dataset
+ * @param layerName - Name of the layer
+ * @param bbox - Bounding box coordinates
+ * @param newDatasetName - Name for the new dataset
+ * @param doEvaluation - Whether to perform evaluation
+ * @param annotationId - ID of the annotation for evaluation
+ * @param useSparseTracing - Whether to use sparse tracing
+ * @param evalMaxEdgeLength - Maximum edge length in nanometers
+ * @param evalSparseTubeThresholdNm - Sparse tube threshold in nanometers
+ * @param evalMinMergerPathLengthNm - Minimum merger path length in nanometers
+ */
 export function startNeuronInferralJob(
   // ... existing parameters ...
   doEvaluation: boolean,
   annotationId?: string,
   useSparseTracing?: boolean,
-  evalMaxEdgeLength?: string,
-  evalSparseTubeThresholdNm?: string,
-  evalMinMergerPathLengthNm?: string,
+  evalMaxEdgeLength?: number,
+  evalSparseTubeThresholdNm?: number,
+  evalMinMergerPathLengthNm?: number,
conf/webknossos.latest.routes (1)

267-267: Consider refactoring to use a request body.

The endpoint has many parameters which makes it harder to maintain and use. Consider refactoring to accept a JSON request body instead of query parameters. This would improve readability and make it easier to add new parameters in the future.

-POST          /jobs/run/inferNeurons/:organizationId/:datasetName                               controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
+POST          /jobs/run/inferNeurons/:organizationId/:datasetName                               controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String)

And create a case class for the request body:

case class InferNeuronsRequest(
  layerName: String,
  bbox: String,
  newDatasetName: String,
  doEvaluation: Boolean,
  annotationId: Option[String],
  evalUseSparseTracing: Option[Boolean],
  evalMaxEdgeLength: Option[Double],
  evalSparseTubeThresholdNm: Option[Double],
  evalMinMergerPathLengthNm: Option[Double]
)
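For illustration, here is a minimal sketch of how the frontend might build such a JSON body if the route were refactored this way. This is not code from the PR: the interface and `buildInferNeuronsRequest` helper are hypothetical, with field names mirroring the suggested case class above.

```typescript
// Hypothetical request body mirroring the suggested InferNeuronsRequest case class.
interface InferNeuronsRequest {
  layerName: string;
  bbox: string;
  newDatasetName: string;
  doEvaluation: boolean;
  annotationId?: string;
  evalUseSparseTracing?: boolean;
  evalMaxEdgeLength?: number;
  evalSparseTubeThresholdNm?: number;
  evalMinMergerPathLengthNm?: number;
}

// Builds the body; evaluation fields are only included when evaluation
// is requested, so doEvaluation=false carries no stray eval parameters.
function buildInferNeuronsRequest(
  base: Pick<InferNeuronsRequest, "layerName" | "bbox" | "newDatasetName">,
  evaluation?: Partial<
    Omit<InferNeuronsRequest, "layerName" | "bbox" | "newDatasetName" | "doEvaluation">
  >,
): InferNeuronsRequest {
  return evaluation
    ? { ...base, doEvaluation: true, ...evaluation }
    : { ...base, doEvaluation: false };
}

const body = buildInferNeuronsRequest(
  { layerName: "segmentation", bbox: "0,0,0,512,512,512", newDatasetName: "my_inferred_dataset" },
  { annotationId: "abc123", evalUseSparseTracing: true, evalMaxEdgeLength: 1000 },
);
```

This would also let numeric parameters stay numbers end to end, instead of being stringified into query parameters.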
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3c7700c and 0fa0cfd.

📒 Files selected for processing (4)
  • app/controllers/JobController.scala (2 hunks)
  • conf/webknossos.latest.routes (1 hunks)
  • frontend/javascripts/admin/api/jobs.ts (1 hunks)
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
🔇 Additional comments (5)
app/controllers/JobController.scala (2)

229-235: 🛠️ Refactor suggestion

Consider refactoring evaluation parameters into a case class

The method signature is becoming quite long with 11 parameters. Consider creating a dedicated case class for evaluation parameters to improve maintainability and readability.

case class NeuronEvaluationParams(
  doEvaluation: Boolean,
  annotationId: Option[String],
  useSparseTracing: Option[Boolean],
  maxEdgeLength: Option[Double],  // Changed from String to Double
  sparseTubeThresholdNm: Option[Double],
  minMergerPathLengthNm: Option[Double]
)

Also, consider adding parameter validation for the numerical values to ensure they are within acceptable ranges.


257-262: 🛠️ Refactor suggestion

Add parameter validation and improve JSON structure

The command arguments should include validation and proper structuring of evaluation parameters:

  1. Consider validating and transforming string parameters to numerical values before adding to JSON
  2. Group evaluation parameters in a nested object for better organization
- "do_evaluation" -> doEvaluation,
- "annotation_id" -> annotationId,
- "eval_use_sparse_tracing" -> evalUseSparseTracing,
- "eval_max_edge_length" -> evalMaxEdgeLength,
- "eval_sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm,
- "eval_min_merger_path_length_nm" -> evalMinMergerPathLengthNm,
+ "evaluation" -> {
+   if (doEvaluation) {
+     Json.obj(
+       "enabled" -> true,
+       "annotation_id" -> annotationId,
+       "parameters" -> Json.obj(
+         "use_sparse_tracing" -> evalUseSparseTracing,
+         "max_edge_length" -> evalMaxEdgeLength.map(_.toDouble),
+         "sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm.map(_.toDouble),
+         "min_merger_path_length_nm" -> evalMinMergerPathLengthNm.map(_.toDouble)
+       )
+     )
+   } else {
+     Json.obj("enabled" -> false)
+   }
+ }

This change:

  • Groups evaluation parameters logically
  • Adds basic validation for numerical values
  • Improves readability of the command structure

Please ensure that these changes align with the voxelytics service's expected payload structure.
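To make the proposed nesting concrete, here is a small TypeScript sketch of the same payload shape. This is purely illustrative and not from the PR: the `EvaluationPayload` type and `buildEvaluationPayload` helper are hypothetical, and the voxelytics service may expect a different structure.

```typescript
// Hypothetical nested payload shape, mirroring the Json.obj suggestion above.
type EvaluationPayload =
  | { enabled: false }
  | {
      enabled: true;
      annotation_id: string;
      parameters: {
        use_sparse_tracing?: boolean;
        max_edge_length?: number;
        sparse_tube_threshold_nm?: number;
        min_merger_path_length_nm?: number;
      };
    };

// Groups evaluation parameters under one key and validates that an
// annotation id is present whenever evaluation is enabled.
function buildEvaluationPayload(
  doEvaluation: boolean,
  annotationId?: string,
  parameters: {
    use_sparse_tracing?: boolean;
    max_edge_length?: number;
    sparse_tube_threshold_nm?: number;
    min_merger_path_length_nm?: number;
  } = {},
): EvaluationPayload {
  if (!doEvaluation) {
    return { enabled: false };
  }
  if (annotationId == null) {
    throw new Error("annotationId is required when evaluation is enabled");
  }
  return { enabled: true, annotation_id: annotationId, parameters };
}
```

The discriminated union makes the "disabled" case explicit, so downstream code cannot accidentally read evaluation parameters when evaluation is off.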

conf/webknossos.latest.routes (1)

267-267: LGTM! Consider reviewing parameter types.

The endpoint changes correctly implement the evaluation settings functionality. However, verify if evalMaxEdgeLength, evalSparseTubeThresholdNm, and evalMinMergerPathLengthNm should be numeric types instead of String.

frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (2)

104-104: Addition of jobSpecificInputFields prop looks good

The addition of the optional jobSpecificInputFields prop to StartJobFormProps enhances the flexibility of the StartJobForm component by allowing custom input fields to be injected. This implementation is appropriate and follows good practices.


797-799: Passing CollapsibleEvaluationSettings as jobSpecificInputFields

Integrating the CollapsibleEvaluationSettings component through the jobSpecificInputFields prop enhances the modularity of the NeuronSegmentationForm. This approach maintains the flexibility of the StartJobForm component and adheres to good React practices.

Comment on lines +207 to +215
  doEvaluation: doEvaluation.toString(),
});
if (doEvaluation) {
  urlParams.append("annotationId", `${annotationId}`);
  urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
  urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
  urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
  urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
}

@coderabbitai coderabbitai bot Nov 20, 2024


🛠️ Refactor suggestion

Add parameter validation for evaluation settings.

The code should validate optional parameters before appending them to prevent potential undefined values from being sent to the API.

   if (doEvaluation) {
+    if (!annotationId) {
+      throw new Error("annotationId is required when doEvaluation is true");
+    }
     urlParams.append("annotationId", `${annotationId}`);
-    urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
-    urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
-    urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
-    urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
+    if (useSparseTracing != null) {
+      urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
+    }
+    if (evalMaxEdgeLength != null) {
+      urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
+    }
+    if (evalSparseTubeThresholdNm != null) {
+      urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
+    }
+    if (evalMinMergerPathLengthNm != null) {
+      urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
+    }
   }
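The null-guarded pattern in this diff can be factored into a tiny helper. This is a sketch, not code from the PR; the `appendIfDefined` name is hypothetical.

```typescript
// Append a query parameter only when it actually has a value, so the
// backend never receives the literal strings "undefined" or "null".
function appendIfDefined(
  params: URLSearchParams,
  key: string,
  value: string | number | boolean | null | undefined,
): void {
  // `!= null` filters both null and undefined in one check.
  if (value != null) {
    params.append(key, String(value));
  }
}

const urlParams = new URLSearchParams({ doEvaluation: "true" });
appendIfDefined(urlParams, "annotationId", "abc123");
appendIfDefined(urlParams, "evalMaxEdgeLength", 1000);
appendIfDefined(urlParams, "evalSparseTubeThresholdNm", undefined); // skipped
```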

Contributor


I think it's ok to apply this suggestion, but this isn't a must imo.

The backend should at least realize some params are missing (see my comment above)



@MichaelBuessemeyer, understood. If the backend handles missing parameters, additional validation on the frontend is indeed optional.


✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/admin/api/jobs.ts:207-215
Timestamp: 2024-11-25T14:52:20.691Z
Learning: In the webknossos codebase, it's acceptable to rely on the backend to handle missing parameters, so additional parameter validation in the frontend is optional.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

>
<InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />

@coderabbitai coderabbitai bot Nov 20, 2024


⚠️ Potential issue

Empty Form.Item without a child component

The Form.Item at line 584 does not contain any input or form control as its child. Form.Item is intended to wrap form controls like inputs, checkboxes, or selectors. An empty Form.Item may lead to unexpected behavior and should be avoided. Please ensure that you include the appropriate form control within this Form.Item.

Apply this diff to address the issue:

-                <Form.Item name="useAnnotation" initialValue={true} />
+                {/* Remove this empty Form.Item or add a child component if needed */}

Committable suggestion skipped: line range outside the PR's diff.

Contributor


Yep, the AI is correct here. But it seems you simply want the form data to always have useAnnotation=true in case an evaluation is done, correct? In that case, I'd suggest making the form item hidden:

Suggested change
<Form.Item name="useAnnotation" initialValue={true} />
<Form.Item name="useAnnotation" initialValue={true} hidden />



@MichaelBuessemeyer, thank you for the clarification. Yes, making the Form.Item hidden with initialValue={true} is a good solution to ensure that useAnnotation is always set to true when evaluation is done, without rendering an empty form item. I've noted this for future reviews.


✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.511Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.


Comment on lines +530 to +593
function CollapsibleEvaluationSettings({
  isActive = false,
  setActive,
}: { isActive: boolean; setActive: (active: boolean) => void }) {
  return (
    <Collapse
      style={{ marginBottom: 8 }}
      onChange={() => setActive(!isActive)}
      expandIcon={() => <Checkbox checked={isActive} />}
      items={[
        {
          key: "evaluation",
          label: "Evaluation Settings",
          children: (
            <Row>
              <Col style={{ width: "100%" }}>
                <Form.Item
                  label="Use sparse ground truth tracing"
                  name={["evaluationSettings", "useSparseTracing"]}
                  valuePropName="checked"
                  initialValue={false}
                  tooltip="The evaluation mode can either be `dense`
                    in case all processes in the volume are annotated in the ground-truth.
                    If not, use the `sparse` mode."
                >
                  <Checkbox style={{ width: "100%" }} />
                </Form.Item>
                <Form.Item
                  label="Max edge length in nm"
                  name={["evaluationSettings", "maxEdgeLength"]}
                  tooltip="Ground truth tracings can be densified so that
                    nodes are at most max_edge_length nm apart.
                    However, this can also introduce wrong nodes in curved processes."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="None" />
                </Form.Item>
                <Form.Item
                  label="Sparse tube threshold in nm"
                  name={["evaluationSettings", "sparseTubeThresholdInNm"]}
                  tooltip="Tube threshold for sparse evaluation,
                    determining if a process is too far from the ground-truth."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="1000" />
                </Form.Item>
                <Form.Item
                  label="Sparse minimum merger path length in nm"
                  name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
                  tooltip="Minimum ground truth path length of a merger component
                    to be counted as a relevant merger (for sparse evaluation).
                    Note, the path length to neighboring nodes of a component is included for this comparison.
                    This optimistic path length estimation makes sure no relevant mergers are ignored."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="800" />
                </Form.Item>
                <Form.Item name="useAnnotation" initialValue={true} />
              </Col>
            </Row>
          ),
        },
      ]}
      activeKey={isActive ? "evaluation" : []}
    />
  );
}

@coderabbitai coderabbitai bot Nov 20, 2024


⚠️ Potential issue

Incorrect usage of tooltip prop in Form.Item components

The Form.Item component from Ant Design does not support a tooltip prop directly. To display a tooltip for the label, you should wrap the label content with the Tooltip component. This ensures that the tooltip behaves as expected and follows the Ant Design specifications.

Apply this diff to fix the issue:

-                <Form.Item
-                  label="Use sparse ground truth tracing"
-                  name={["evaluationSettings", "useSparseTracing"]}
-                  valuePropName="checked"
-                  initialValue={false}
-                  tooltip="The evaluation mode can either be `dense`
-      in case all processes in the volume are annotated in the ground-truth.
-      If not, use the `sparse` mode."
-                >
+                <Form.Item
+                  label={
+                    <span>
+                      Use sparse ground truth tracing&nbsp;
+                      <Tooltip title="The evaluation mode can either be `dense` in case all processes in the volume are annotated in the ground-truth. If not, use the `sparse` mode.">
+                        <InfoCircleOutlined />
+                      </Tooltip>
+                    </span>
+                  }
+                  name={["evaluationSettings", "useSparseTracing"]}
+                  valuePropName="checked"
+                  initialValue={false}
+                >
                   <Checkbox style={{ width: "100%" }} />
                 </Form.Item>

Apply similar changes to the other Form.Item components with tooltips.

Committable suggestion skipped: line range outside the PR's diff.

Contributor


Sorry, but you are wrong here, AI :)
The docs state that there is a tooltip prop.

Contributor


I'd say to also ignore that @cdfhalle



Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!




Comment on lines +535 to +591
<Collapse
  style={{ marginBottom: 8 }}
  onChange={() => setActive(!isActive)}
  expandIcon={() => <Checkbox checked={isActive} />}
  items={[
    {
      key: "evaluation",
      label: "Evaluation Settings",
      children: (
        <Row>
          <Col style={{ width: "100%" }}>
            <Form.Item
              label="Use sparse ground truth tracing"
              name={["evaluationSettings", "useSparseTracing"]}
              valuePropName="checked"
              initialValue={false}
              tooltip="The evaluation mode can either be `dense`
                in case all processes in the volume are annotated in the ground-truth.
                If not, use the `sparse` mode."
            >
              <Checkbox style={{ width: "100%" }} />
            </Form.Item>
            <Form.Item
              label="Max edge length in nm"
              name={["evaluationSettings", "maxEdgeLength"]}
              tooltip="Ground truth tracings can be densified so that
                nodes are at most max_edge_length nm apart.
                However, this can also introduce wrong nodes in curved processes."
            >
              <InputNumber style={{ width: "100%" }} placeholder="None" />
            </Form.Item>
            <Form.Item
              label="Sparse tube threshold in nm"
              name={["evaluationSettings", "sparseTubeThresholdInNm"]}
              tooltip="Tube threshold for sparse evaluation,
                determining if a process is too far from the ground-truth."
            >
              <InputNumber style={{ width: "100%" }} placeholder="1000" />
            </Form.Item>
            <Form.Item
              label="Sparse minimum merger path length in nm"
              name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
              tooltip="Minimum ground truth path length of a merger component
                to be counted as a relevant merger (for sparse evaluation).
                Note, the path length to neighboring nodes of a component is included for this comparison.
                This optimistic path length estimation makes sure no relevant mergers are ignored."
            >
              <InputNumber style={{ width: "100%" }} placeholder="800" />
            </Form.Item>
            <Form.Item name="useAnnotation" initialValue={true} />
          </Col>
        </Row>
      ),
    },
  ]}
  activeKey={isActive ? "evaluation" : []}
/>

@coderabbitai coderabbitai bot Nov 20, 2024


🛠️ Refactor suggestion

Potential issue with Collapse component's onChange handler

The onChange handler for the Collapse component receives the active key(s) as arguments. Currently, the code toggles isActive using !isActive without considering these arguments, which might lead to synchronization issues between the Collapse state and the isActive state. It is recommended to use the provided key parameter to update the isActive state accurately.

Apply this diff to improve the implementation:

        <Collapse
          style={{ marginBottom: 8 }}
-         onChange={() => setActive(!isActive)}
+         onChange={(key) => setActive(key.includes("evaluation"))}
          expandIcon={() => <Checkbox checked={isActive} />}
          items={[

Contributor

I don't see the issue here.

I'd ignore that @cdfhalle

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

…com:scalableminds/webknossos into evaluate-segmentation-in-infer-neurons-task
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
CHANGELOG.unreleased.md (1)

Line range hint 31-31: Document API changes in Breaking Changes section

The PR includes significant API changes that should be documented in the Breaking Changes section:

 ### Breaking Changes
+- The `JobController.runInferNeuronsJob` method signature has been updated to include additional parameters for evaluation settings.
+- The routing configuration for the `inferNeurons` endpoint has been modified to support new evaluation parameters.
+- Frontend API changes: The `startNeuronInferralJob` function now requires additional parameters for evaluation configuration.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0fa0cfd and 785ddff.

📒 Files selected for processing (1)
  • CHANGELOG.unreleased.md (1 hunks)

@@ -11,6 +11,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
[Commits](https://github.com/scalableminds/webknossos/compare/24.11.1...HEAD)

### Added
- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
@coderabbitai coderabbitai bot Nov 20, 2024

🛠️ Refactor suggestion

Fix typo and enhance the changelog entry

  1. Fix the typo: "inferrence" should be "inference"
  2. The entry should be expanded to better reflect the scope of changes:
-- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
+- It is now possible to start a split-merger evaluation when starting a neuron inference. The neuron segmentation tab now includes an evaluation settings form that allows configuring parameters for the evaluate-split-merger task. The evaluation runs immediately after inference, and error annotations are uploaded to the dataset. [#8221](https://github.com/scalableminds/webknossos/pull/8221)

Contributor


I'd say there is no need for such a detailed explanation. But the message has a little typo :)

Suggested change
- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
- It is now possible to start a split-merger evaluation when starting a neuron inference. [#8221](https://github.com/scalableminds/webknossos/pull/8221)


@MichaelBuessemeyer, understood! Thank you for the feedback. I'll keep the changelog entries concise.


✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: CHANGELOG.unreleased.md:14-14
Timestamp: 2024-11-25T14:52:23.036Z
Learning: For the `webknossos` project, changelog entries in `CHANGELOG.unreleased.md` should be concise and avoid unnecessary detailed explanations.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Contributor

@MichaelBuessemeyer MichaelBuessemeyer left a comment


Great Work. Thanks a lot for your PR @cdfhalle
Especially how you integrated the new UI 👍

Testing went well 💪 but I would like to have a few minor points addressed. The main points are:

  1. Please rename almost all variables with evaluation in their name so the name conveys that this is a split-merger evaluation, e.g. doEvaluation -> doSplitMergerEvaluation. This makes it easier to understand which evaluation is meant, as there might be other evaluations for other jobs or even for the same job in the future.
  2. Please ensure that, in case an evaluation should be done, the backend validates that the required parameters are actually sent by the client (not None) instead of accepting empty options.
  • The rest is minor stuff I'd say :)
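To illustrate review point 2: a minimal TypeScript sketch of the requested "fail fast when evaluation parameters are missing" check. This is not the actual webknossos/voxelytics code — the type shape and helper name are hypothetical, modeled on the form field names visible in the diff above.

```typescript
// Hypothetical settings shape, mirroring the evaluationSettings form fields in the PR.
type SplitMergerEvaluationSettings = {
  useSparseTracing?: boolean;
  maxEdgeLength?: number;
  sparseTubeThresholdInNm?: number;
  minimumMergerPathLengthInNm?: number;
};

// Collects validation errors instead of silently forwarding empty options.
function validateSplitMergerEvaluationParams(
  doSplitMergerEvaluation: boolean,
  annotationId: string | null,
  settings: SplitMergerEvaluationSettings,
): string[] {
  const errors: string[] = [];
  // When no evaluation is requested, none of the parameters are required.
  if (!doSplitMergerEvaluation) return errors;
  if (annotationId == null) {
    errors.push("annotationId is required for a split-merger evaluation");
  }
  if (settings.useSparseTracing == null) {
    errors.push("useSparseTracing must be set when an evaluation is requested");
  }
  return errors;
}

// e.g. validateSplitMergerEvaluationParams(true, null, {}) flags both missing fields.
```

The same idea would apply server-side in the Scala controller: reject the request with a 400 instead of passing `None` options on to the voxelytics job.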

@@ -226,7 +226,13 @@ class JobController @Inject()(
datasetName: String,
layerName: String,
bbox: String,
newDatasetName: String): Action[AnyContent] =
newDatasetName: String,
doEvaluation: Boolean,
Contributor

I wouldn't know what kind of evaluation would be performed in case doEvaluation is set to true (as I do not work with voxelytics regularly). Therefore, I would prefer to make the name more explicit.

Suggested change
doEvaluation: Boolean,
doSplitMergerEvaluation: Boolean,

@@ -248,6 +254,12 @@ class JobController @Inject()(
"new_dataset_name" -> newDatasetName,
"layer_name" -> layerName,
"bbox" -> bbox,
"do_evaluation" -> doEvaluation,
Contributor

same renaming here

Suggested change
"do_evaluation" -> doEvaluation,
"do_split_merger_evaluation" -> doSplitMergerEvaluation,

@@ -264,7 +264,7 @@ POST /jobs/run/computeMeshFile/:organizationId/:datasetName
POST /jobs/run/computeSegmentIndexFile/:organizationId/:datasetName controllers.JobController.runComputeSegmentIndexFileJob(organizationId: String, datasetName: String, layerName: String)
POST /jobs/run/exportTiff/:organizationId/:datasetName controllers.JobController.runExportTiffJob(organizationId: String, datasetName: String, bbox: String, additionalCoordinates: Option[String], layerName: Option[String], mag: Option[String], annotationLayerName: Option[String], annotationId: Option[String], asOmeTiff: Boolean)
POST /jobs/run/inferNuclei/:organizationId/:datasetName controllers.JobController.runInferNucleiJob(organizationId: String, datasetName: String, layerName: String, newDatasetName: String)
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String)
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
Contributor

The variable renaming from above, plus adding spaces between the param definitions

Suggested change
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doSplitMergerEvaluation: Boolean, annotationId: Option[String], evalUseSparseTracing: Option[Boolean], evalMaxEdgeLength: Option[String], evalSparseTubeThresholdNm: Option[String], evalMinMergerPathLengthNm: Option[String])

@@ -193,12 +193,26 @@ export function startNeuronInferralJob(
layerName: string,
bbox: Vector6,
newDatasetName: string,
doEvaluation: boolean,
Contributor

Suggested change
doEvaluation: boolean,
doSplitMergerEvaluation: boolean,

): Promise<APIJob> {
const urlParams = new URLSearchParams({
layerName,
bbox: bbox.join(","),
newDatasetName,
doEvaluation: doEvaluation.toString(),
Contributor

Suggested change
doEvaluation: doEvaluation.toString(),
doSplitMergerEvaluation: doSplitMergerEvaluation.toString(),

And so on 🙈
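The suggestion above boils down to one mechanical detail: `URLSearchParams` only accepts strings, so booleans and the bbox array must be serialized explicitly. A small standalone sketch (helper name is illustrative, using the renamed `doSplitMergerEvaluation` parameter from the review):

```typescript
// Builds the query string for the inferNeurons job request.
// bbox follows the Vector6 convention from the diff: [x, y, z, width, height, depth].
function buildInferNeuronsParams(
  layerName: string,
  bbox: number[],
  newDatasetName: string,
  doSplitMergerEvaluation: boolean,
): URLSearchParams {
  return new URLSearchParams({
    layerName,
    bbox: bbox.join(","), // e.g. "0,0,0,512,512,64"
    newDatasetName,
    // URLSearchParams values must be strings; serialize the boolean explicitly.
    doSplitMergerEvaluation: doSplitMergerEvaluation.toString(),
  });
}
```

Note that `URLSearchParams` also takes care of percent-encoding, so dataset names with special characters stay safe in the URL.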

Comment on lines +535 to +591
<Collapse
style={{ marginBottom: 8 }}
onChange={() => setActive(!isActive)}
expandIcon={() => <Checkbox checked={isActive} />}
items={[
{
key: "evaluation",
label: "Evaluation Settings",
children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item
label="Use sparse ground truth tracing"
name={["evaluationSettings", "useSparseTracing"]}
valuePropName="checked"
initialValue={false}
tooltip="The evaluation mode can either be `dense`
in case all processes in the volume are annotated in the ground-truth.
If not, use the `sparse` mode."
>
<Checkbox style={{ width: "100%" }} />
</Form.Item>
<Form.Item
label="Max edge length in nm"
name={["evaluationSettings", "maxEdgeLength"]}
tooltip="Ground truth tracings can be densified so that
nodes are at most max_edge_length nm apart.
However, this can also introduce wrong nodes in curved processes."
>
<InputNumber style={{ width: "100%" }} placeholder="None" />
</Form.Item>
<Form.Item
label="Sparse tube threshold in nm"
name={["evaluationSettings", "sparseTubeThresholdInNm"]}
tooltip="Tube threshold for sparse evaluation,
determining if a process is too far from the ground-truth."
>
<InputNumber style={{ width: "100%" }} placeholder="1000" />
</Form.Item>
<Form.Item
label="Sparse minimum merger path length in nm"
name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
tooltip="Minimum ground truth path length of a merger component
to be counted as a relevant merger (for sparse evaluation).
Note, the path length to neighboring nodes of a component is included for this comparison. This optimistic path length
estimation makes sure no relevant mergers are ignored."
>
<InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />
</Col>
</Row>
),
},
]}
activeKey={isActive ? "evaluation" : []}
/>
Contributor

I don't see the issue here.

I'd ignore that @cdfhalle

Comment on lines +530 to +593
function CollapsibleEvaluationSettings({
isActive = false,
setActive,
}: { isActive: boolean; setActive: (active: boolean) => void }) {
return (
<Collapse
style={{ marginBottom: 8 }}
onChange={() => setActive(!isActive)}
expandIcon={() => <Checkbox checked={isActive} />}
items={[
{
key: "evaluation",
label: "Evaluation Settings",
children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item
label="Use sparse ground truth tracing"
name={["evaluationSettings", "useSparseTracing"]}
valuePropName="checked"
initialValue={false}
tooltip="The evaluation mode can either be `dense`
in case all processes in the volume are annotated in the ground-truth.
If not, use the `sparse` mode."
>
<Checkbox style={{ width: "100%" }} />
</Form.Item>
<Form.Item
label="Max edge length in nm"
name={["evaluationSettings", "maxEdgeLength"]}
tooltip="Ground truth tracings can be densified so that
nodes are at most max_edge_length nm apart.
However, this can also introduce wrong nodes in curved processes."
>
<InputNumber style={{ width: "100%" }} placeholder="None" />
</Form.Item>
<Form.Item
label="Sparse tube threshold in nm"
name={["evaluationSettings", "sparseTubeThresholdInNm"]}
tooltip="Tube threshold for sparse evaluation,
determining if a process is too far from the ground-truth."
>
<InputNumber style={{ width: "100%" }} placeholder="1000" />
</Form.Item>
<Form.Item
label="Sparse minimum merger path length in nm"
name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
tooltip="Minimum ground truth path length of a merger component
to be counted as a relevant merger (for sparse evaluation).
Note, the path length to neighboring nodes of a component is included for this comparison. This optimistic path length
estimation makes sure no relevant mergers are ignored."
>
<InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />
</Col>
</Row>
),
},
]}
activeKey={isActive ? "evaluation" : []}
/>
);
}
Contributor

I'd say to also ignore that @cdfhalle

@@ -11,6 +11,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
[Commits](https://github.com/scalableminds/webknossos/compare/24.11.1...HEAD)

### Added
- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
Contributor

I'd say there is no need for such a detailed explanation. But the message has a little typo :)

Suggested change
- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
- It is now possible to start a split-merger evaluation when starting a neuron inference. [#8221](https://github.com/scalableminds/webknossos/pull/8221)

Comment on lines +844 to +846
jobSpecificInputFields={
<CollapsibleEvaluationSettings isActive={useEvaluation} setActive={setUseEvaluation} />
}
Contributor

I am just guessing here: the split-merger evaluation requires an annotation to work, doesn't it? And more specifically, a skeleton annotation is required, isn't it?

If that's so, please only set the jobSpecificInputFields to CollapsibleEvaluationSettings if the user currently has an open annotation with a skeleton annotation.
You can retrieve whether there is a skeleton tracing via:

  const hasSkeletonAnnotation = useSelector((state: OxalisState) => state.tracing.skeleton != null);
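As a plain function (independent of React/Redux), the gating logic suggested above can be sketched like this — the state type is reduced to the single field used, so this is an illustration of the predicate rather than the real OxalisState:

```typescript
// Minimal stand-in for the relevant slice of OxalisState.
type MinimalOxalisState = {
  tracing: { skeleton: object | null };
};

// The evaluation settings should only be offered when the open
// annotation contains a skeleton tracing.
function shouldShowEvaluationSettings(state: MinimalOxalisState): boolean {
  return state.tracing.skeleton != null;
}
```

In the component, the result of this predicate (via `useSelector`) would decide whether `jobSpecificInputFields` gets the `CollapsibleEvaluationSettings` element or stays undefined.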

children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item
Contributor

This form item with the checkbox looks a little quirky in vertical mode. Could you please make it horizontal?

<Form.Item
  ...
  layout="horizontal"
  />
