Evaluate segmentation in infer neurons task #8221
base: master
Conversation
Walkthrough: The pull request introduces enhancements to the job submission process within the application.
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (2)
frontend/javascripts/admin/api/jobs.ts (1)
196-201: Consider improving parameter types and adding documentation.
While the new parameters align well with the evaluation functionality:
- Consider using numeric types instead of strings for measurement parameters:
evalMaxEdgeLength
evalSparseTubeThresholdNm
evalMinMergerPathLengthNm
- Consider adding JSDoc comments to document the purpose and expected values of each parameter.
+/**
+ * Starts a neuron inference job with optional evaluation.
+ * @param organizationId - Organization identifier
+ * @param datasetName - Name of the dataset
+ * @param layerName - Name of the layer
+ * @param bbox - Bounding box coordinates
+ * @param newDatasetName - Name for the new dataset
+ * @param doEvaluation - Whether to perform evaluation
+ * @param annotationId - ID of the annotation for evaluation
+ * @param useSparseTracing - Whether to use sparse tracing
+ * @param evalMaxEdgeLength - Maximum edge length in nanometers
+ * @param evalSparseTubeThresholdNm - Sparse tube threshold in nanometers
+ * @param evalMinMergerPathLengthNm - Minimum merger path length in nanometers
+ */
 export function startNeuronInferralJob(
   // ... existing parameters ...
   doEvaluation: boolean,
   annotationId?: string,
   useSparseTracing?: boolean,
-  evalMaxEdgeLength?: string,
-  evalSparseTubeThresholdNm?: string,
-  evalMinMergerPathLengthNm?: string,
+  evalMaxEdgeLength?: number,
+  evalSparseTubeThresholdNm?: number,
+  evalMinMergerPathLengthNm?: number,

conf/webknossos.latest.routes (1)
267-267: Consider refactoring to use a request body.
The endpoint has many parameters, which makes it harder to maintain and use. Consider refactoring to accept a JSON request body instead of query parameters. This would improve readability and make it easier to add new parameters in the future.
-POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String], evalUseSparseTracing: Option[Boolean], evalMaxEdgeLength: Option[String], evalSparseTubeThresholdNm: Option[String], evalMinMergerPathLengthNm: Option[String])
+POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String)

And create a case class for the request body:
case class InferNeuronsRequest(
  layerName: String,
  bbox: String,
  newDatasetName: String,
  doEvaluation: Boolean,
  annotationId: Option[String],
  evalUseSparseTracing: Option[Boolean],
  evalMaxEdgeLength: Option[Double],
  evalSparseTubeThresholdNm: Option[Double],
  evalMinMergerPathLengthNm: Option[Double]
)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
app/controllers/JobController.scala (2 hunks)
conf/webknossos.latest.routes (1 hunks)
frontend/javascripts/admin/api/jobs.ts (1 hunks)
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
🔇 Additional comments (5)
app/controllers/JobController.scala (2)
229-235: 🛠️ Refactor suggestion
Consider refactoring evaluation parameters into a case class
The method signature is becoming quite long with 11 parameters. Consider creating a dedicated case class for evaluation parameters to improve maintainability and readability.
case class NeuronEvaluationParams(
doEvaluation: Boolean,
annotationId: Option[String],
useSparseTracing: Option[Boolean],
maxEdgeLength: Option[Double], // Changed from String to Double
sparseTubeThresholdNm: Option[Double],
minMergerPathLengthNm: Option[Double]
)
Also, consider adding parameter validation for the numerical values to ensure they are within acceptable ranges.
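Such a range check could be sketched as follows (TypeScript for illustration; the parameter names match the PR, but the positivity bound is an assumption, since the acceptable ranges are not stated in this thread):

```typescript
// Hypothetical validation of the numerical evaluation parameters before
// job submission. The "must be positive" rule is an illustrative assumption,
// not a requirement taken from the PR.
type EvalParams = {
  maxEdgeLength?: number;
  sparseTubeThresholdNm?: number;
  minMergerPathLengthNm?: number;
};

function validateEvalParams(params: EvalParams): string[] {
  const errors: string[] = [];
  const checkPositive = (name: string, value?: number) => {
    // Only validate parameters that were actually provided.
    if (value != null && (!Number.isFinite(value) || value <= 0)) {
      errors.push(`${name} must be a positive number, got ${value}`);
    }
  };
  checkPositive("maxEdgeLength", params.maxEdgeLength);
  checkPositive("sparseTubeThresholdNm", params.sparseTubeThresholdNm);
  checkPositive("minMergerPathLengthNm", params.minMergerPathLengthNm);
  return errors;
}
```

Returning a list of error messages (rather than throwing) lets the caller surface all invalid fields at once in the form.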
257-262: 🛠️ Refactor suggestion
Add parameter validation and improve JSON structure
The command arguments should include validation and proper structuring of evaluation parameters:
- Consider validating and transforming string parameters to numerical values before adding to JSON
- Group evaluation parameters in a nested object for better organization
- "do_evaluation" -> doEvaluation,
- "annotation_id" -> annotationId,
- "eval_use_sparse_tracing" -> evalUseSparseTracing,
- "eval_max_edge_length" -> evalMaxEdgeLength,
- "eval_sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm,
- "eval_min_merger_path_length_nm" -> evalMinMergerPathLengthNm,
+ "evaluation" -> {
+ if (doEvaluation) {
+ Json.obj(
+ "enabled" -> true,
+ "annotation_id" -> annotationId,
+ "parameters" -> Json.obj(
+ "use_sparse_tracing" -> evalUseSparseTracing,
+ "max_edge_length" -> evalMaxEdgeLength.map(_.toDouble),
+ "sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm.map(_.toDouble),
+ "min_merger_path_length_nm" -> evalMinMergerPathLengthNm.map(_.toDouble)
+ )
+ )
+ } else {
+ Json.obj("enabled" -> false)
+ }
+ }
This change:
- Groups evaluation parameters logically
- Adds basic validation for numerical values
- Improves readability of the command structure
Please ensure that these changes align with the voxelytics service's expected payload structure.
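For reference, the grouped structure proposed above can be sketched in TypeScript (field names mirror the Scala suggestion; whether voxelytics actually expects this shape is an open question, as the comment notes):

```typescript
// Sketch of the nested evaluation payload proposed in the review comment.
// All field names follow the suggestion above; they are not confirmed
// against the voxelytics service's expected payload.
function buildEvaluationPayload(
  doEvaluation: boolean,
  annotationId?: string,
  useSparseTracing?: boolean,
  maxEdgeLength?: number,
): Record<string, unknown> {
  if (!doEvaluation) {
    // Evaluation disabled: a single flag is enough.
    return { enabled: false };
  }
  return {
    enabled: true,
    annotation_id: annotationId,
    parameters: {
      use_sparse_tracing: useSparseTracing,
      max_edge_length: maxEdgeLength,
    },
  };
}
```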
conf/webknossos.latest.routes (1)
267-267: LGTM! Consider reviewing parameter types.
The endpoint changes correctly implement the evaluation settings functionality. However, verify if evalMaxEdgeLength, evalSparseTubeThresholdNm, and evalMinMergerPathLengthNm should be numeric types instead of String.
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (2)
104-104: Addition of jobSpecificInputFields prop looks good
The addition of the optional jobSpecificInputFields prop to StartJobFormProps enhances the flexibility of the StartJobForm component by allowing custom input fields to be injected. This implementation is appropriate and follows good practices.
797-799: Passing CollapsibleEvaluationSettings as jobSpecificInputFields
Integrating the CollapsibleEvaluationSettings component through the jobSpecificInputFields prop enhances the modularity of the NeuronSegmentationForm. This approach maintains the flexibility of the StartJobForm component and adheres to good React practices.
  doEvaluation: doEvaluation.toString(),
});
if (doEvaluation) {
  urlParams.append("annotationId", `${annotationId}`);
  urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
  urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
  urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
  urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
}
🛠️ Refactor suggestion
Add parameter validation for evaluation settings.
The code should validate optional parameters before appending them to prevent potential undefined values from being sent to the API.
if (doEvaluation) {
+ if (!annotationId) {
+ throw new Error("annotationId is required when doEvaluation is true");
+ }
urlParams.append("annotationId", `${annotationId}`);
- urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
- urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
- urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
- urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
+ if (useSparseTracing != null) {
+ urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
+ }
+ if (evalMaxEdgeLength != null) {
+ urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
+ }
+ if (evalSparseTubeThresholdNm != null) {
+ urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
+ }
+ if (evalMinMergerPathLengthNm != null) {
+ urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
+ }
}
I think it's ok to do this suggestion, but this isn't a must imo.
The backend should at least realize some params are missing (see my comment above).
@MichaelBuessemeyer, understood. If the backend handles missing parameters, additional validation on the frontend is indeed optional.
✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/admin/api/jobs.ts:207-215
Timestamp: 2024-11-25T14:52:20.691Z
Learning: In the webknossos codebase, it's acceptable to rely on the backend to handle missing parameters, so additional parameter validation in the frontend is optional.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
>
  <InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />
Empty Form.Item without a child component
The Form.Item at line 584 does not contain any input or form control as its child. Form.Item is intended to wrap form controls like inputs, checkboxes, or selectors. An empty Form.Item may lead to unexpected behavior and should be avoided. Please ensure that you include the appropriate form control within this Form.Item.
Apply this diff to address the issue:
- <Form.Item name="useAnnotation" initialValue={true} />
+ {/* Remove this empty Form.Item or add a child component if needed */}
Committable suggestion skipped: line range outside the PR's diff.
Jup, the AI is correct here. But it seems you simply want the form data to always have useAnnotation=true in case an evaluation is done, correct? In that case, I'd suggest making the form item hidden:
- <Form.Item name="useAnnotation" initialValue={true} />
+ <Form.Item name="useAnnotation" initialValue={true} hidden />
@MichaelBuessemeyer, thank you for the clarification. Yes, making the Form.Item hidden with initialValue={true} is a good solution to ensure that useAnnotation is always set to true when evaluation is done, without rendering an empty form item. I've noted this for future reviews.
✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.511Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.
function CollapsibleEvaluationSettings({
  isActive = false,
  setActive,
}: { isActive: boolean; setActive: (active: boolean) => void }) {
  return (
    <Collapse
      style={{ marginBottom: 8 }}
      onChange={() => setActive(!isActive)}
      expandIcon={() => <Checkbox checked={isActive} />}
      items={[
        {
          key: "evaluation",
          label: "Evaluation Settings",
          children: (
            <Row>
              <Col style={{ width: "100%" }}>
                <Form.Item
                  label="Use sparse ground truth tracing"
                  name={["evaluationSettings", "useSparseTracing"]}
                  valuePropName="checked"
                  initialValue={false}
                  tooltip="The evaluation mode can either be `dense`
                  in case all processes in the volume are annotated in the ground-truth.
                  If not, use the `sparse` mode."
                >
                  <Checkbox style={{ width: "100%" }} />
                </Form.Item>
                <Form.Item
                  label="Max edge length in nm"
                  name={["evaluationSettings", "maxEdgeLength"]}
                  tooltip="Ground truth tracings can be densified so that
                  nodes are at most max_edge_length nm apart.
                  However, this can also introduce wrong nodes in curved processes."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="None" />
                </Form.Item>
                <Form.Item
                  label="Sparse tube threshold in nm"
                  name={["evaluationSettings", "sparseTubeThresholdInNm"]}
                  tooltip="Tube threshold for sparse evaluation,
                  determining if a process is too far from the ground-truth."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="1000" />
                </Form.Item>
                <Form.Item
                  label="Sparse minimum merger path length in nm"
                  name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
                  tooltip="Minimum ground truth path length of a merger component
                  to be counted as a relevant merger (for sparse evaluation).
                  Note, the path length to neighboring nodes of a component is included for this comparison.
                  This optimistic path length estimation makes sure no relevant mergers are ignored."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="800" />
                </Form.Item>
                <Form.Item name="useAnnotation" initialValue={true} />
              </Col>
            </Row>
          ),
        },
      ]}
      activeKey={isActive ? "evaluation" : []}
    />
  );
}
Incorrect usage of tooltip prop in Form.Item components
The Form.Item component from Ant Design does not support a tooltip prop directly. To display a tooltip for the label, you should wrap the label content with the Tooltip component. This ensures that the tooltip behaves as expected and follows the Ant Design specifications.
Apply this diff to fix the issue:
- <Form.Item
- label="Use sparse ground truth tracing"
- name={["evaluationSettings", "useSparseTracing"]}
- valuePropName="checked"
- initialValue={false}
- tooltip="The evaluation mode can either be `dense`
- in case all processes in the volume are annotated in the ground-truth.
- If not, use the `sparse` mode."
- >
+ <Form.Item
+ label={
+ <span>
+ Use sparse ground truth tracing
+ <Tooltip title="The evaluation mode can either be `dense` in case all processes in the volume are annotated in the ground-truth. If not, use the `sparse` mode.">
+ <InfoCircleOutlined />
+ </Tooltip>
+ </span>
+ }
+ name={["evaluationSettings", "useSparseTracing"]}
+ valuePropName="checked"
+ initialValue={false}
+ >
<Checkbox style={{ width: "100%" }} />
</Form.Item>
Apply similar changes to the other Form.Item components with tooltips.
Committable suggestion skipped: line range outside the PR's diff.
Sorry, but you are wrong here AI :)
The docs state that there is a tooltip prop
I'd say to also ignore that @cdfhalle
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
🛠️ Refactor suggestion
Potential issue with Collapse component's onChange handler
The onChange handler for the Collapse component receives the active key(s) as arguments. Currently, the code toggles isActive using !isActive without considering these arguments, which might lead to synchronization issues between the Collapse state and the isActive state. It is recommended to use the provided key parameter to update the isActive state accurately.
Apply this diff to improve the implementation:
<Collapse
style={{ marginBottom: 8 }}
- onChange={() => setActive(!isActive)}
+ onChange={(key) => setActive(key.includes("evaluation"))}
expandIcon={() => <Checkbox checked={isActive} />}
items={[
I don't see the issue here.
I'd ignore that @cdfhalle
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
…com:scalableminds/webknossos into evaluate-segmentation-in-infer-neurons-task
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
CHANGELOG.unreleased.md (1)
Line range hint 31-31: Document API changes in Breaking Changes section
The PR includes significant API changes that should be documented in the Breaking Changes section:

 ### Breaking Changes
+- The `JobController.runInferNeuronsJob` method signature has been updated to include additional parameters for evaluation settings.
+- The routing configuration for the `inferNeurons` endpoint has been modified to support new evaluation parameters.
+- Frontend API changes: The `startNeuronInferralJob` function now requires additional parameters for evaluation configuration.
@@ -11,6 +11,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
 [Commits](https://github.com/scalableminds/webknossos/compare/24.11.1...HEAD)

 ### Added
 - It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
🛠️ Refactor suggestion
Fix typo and enhance the changelog entry
- Fix the typo: "inferrence" should be "inference"
- The entry should be expanded to better reflect the scope of changes:
-- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
+- It is now possible to start a split-merger evaluation when starting a neuron inference. The neuron segmentation tab now includes an evaluation settings form that allows configuring parameters for the evaluate-split-merger task. The evaluation runs immediately after inference, and error annotations are uploaded to the dataset. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
I'd say there is no need for such a detailed explanation. But the message has a little typo :)
-- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
+- It is now possible to start a split-merger evaluation when starting a neuron inference. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
@MichaelBuessemeyer, understood! Thank you for the feedback. I'll keep the changelog entries concise.
✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: CHANGELOG.unreleased.md:14-14
Timestamp: 2024-11-25T14:52:23.036Z
Learning: For the `webknossos` project, changelog entries in `CHANGELOG.unreleased.md` should be concise and avoid unnecessary detailed explanations.
Great Work. Thanks a lot for your PR @cdfhalle
Especially how you integrated the new UI 👍
Testing went well 💪 but I would like to have a few minor points addressed. The main points are:
- Please rename almost all variables with "evaluation" in their name to indicate that the evaluation is a split-merger evaluation, e.g. `doEvaluation` -> `doSplitMergerEvaluation`. This makes it easier to understand which evaluation is meant, as there might be other evaluations for other jobs or even for the same job.
- Please ensure that, in case an evaluation should be done, the backend validates that the required parameters are given (not None) / sent by the client, rather than accepting empty options (None).
- The rest is minor stuff I'd say :)
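The second point — validating the evaluation parameters instead of silently accepting empty options — can be sketched as follows. This is an illustrative TypeScript sketch only (the real check would live in the Scala `JobController`), and the parameter names follow the renaming suggested in this review:

```typescript
// Illustrative only: parameter names follow the review's suggested renaming
// (doSplitMergerEvaluation etc.); this mirrors a check the backend would
// perform, it is not the actual webknossos code.
type SplitMergerEvaluationRequest = {
  doSplitMergerEvaluation: boolean;
  annotationId?: string;
  evalUseSparseTracing?: boolean;
  evalSparseTubeThresholdNm?: number;
  evalMinMergerPathLengthNm?: number;
};

// Collect validation errors instead of silently accepting missing options.
function validateEvaluationParams(req: SplitMergerEvaluationRequest): string[] {
  const errors: string[] = [];
  if (!req.doSplitMergerEvaluation) {
    return errors; // nothing to validate when no evaluation is requested
  }
  if (req.annotationId == null) {
    errors.push("annotationId is required for a split-merger evaluation");
  }
  if (req.evalUseSparseTracing == null) {
    errors.push("evalUseSparseTracing is required for a split-merger evaluation");
  }
  return errors;
}
```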
@@ -226,7 +226,13 @@ class JobController @Inject()(
     datasetName: String,
     layerName: String,
     bbox: String,
-    newDatasetName: String): Action[AnyContent] =
+    newDatasetName: String,
+    doEvaluation: Boolean,
I wouldn't know what kind of evaluation would be performed in case `doEvaluation` is set to true (as I do not work with voxelytics regularly). Therefore, I would prefer to make the name more explicit:
- doEvaluation: Boolean,
+ doSplitMergerEvaluation: Boolean,
@@ -248,6 +254,12 @@ class JobController @Inject()(
     "new_dataset_name" -> newDatasetName,
     "layer_name" -> layerName,
     "bbox" -> bbox,
+    "do_evaluation" -> doEvaluation,
same renaming here
"do_evaluation" -> doEvaluation, | |
"do_split_merger_evaluation" -> doSplitMergerEvaluation, |
@@ -264,7 +264,7 @@ POST /jobs/run/computeMeshFile/:organizationId/:datasetName
 POST /jobs/run/computeSegmentIndexFile/:organizationId/:datasetName controllers.JobController.runComputeSegmentIndexFileJob(organizationId: String, datasetName: String, layerName: String)
 POST /jobs/run/exportTiff/:organizationId/:datasetName controllers.JobController.runExportTiffJob(organizationId: String, datasetName: String, bbox: String, additionalCoordinates: Option[String], layerName: Option[String], mag: Option[String], annotationLayerName: Option[String], annotationId: Option[String], asOmeTiff: Boolean)
 POST /jobs/run/inferNuclei/:organizationId/:datasetName controllers.JobController.runInferNucleiJob(organizationId: String, datasetName: String, layerName: String, newDatasetName: String)
-POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String)
+POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
The variable renaming from above, plus adding spaces between the parameter definitions:
-POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
+POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doSplitMergerEvaluation: Boolean, annotationId: Option[String], evalUseSparseTracing: Option[Boolean], evalMaxEdgeLength: Option[String], evalSparseTubeThresholdNm: Option[String], evalMinMergerPathLengthNm: Option[String])
@@ -193,12 +193,26 @@ export function startNeuronInferralJob(
   layerName: string,
   bbox: Vector6,
   newDatasetName: string,
+  doEvaluation: boolean,
- doEvaluation: boolean,
+ doSplitMergerEvaluation: boolean,
 ): Promise<APIJob> {
   const urlParams = new URLSearchParams({
     layerName,
     bbox: bbox.join(","),
     newDatasetName,
+    doEvaluation: doEvaluation.toString(),
- doEvaluation: doEvaluation.toString(),
+ doSplitMergerEvaluation: doSplitMergerEvaluation.toString(),
And so on 🙈
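Carrying the renaming through the frontend, a hedged sketch of how the query parameters might be assembled — this helper is hypothetical, not the actual `startNeuronInferralJob` from `frontend/javascripts/admin/api/jobs.ts`:

```typescript
// Hypothetical helper illustrating the renamed flag plus optional
// split-merger evaluation parameters; only set options are appended,
// so the backend can distinguish "missing" from "given".
function buildInferNeuronsParams(
  layerName: string,
  bbox: number[],
  newDatasetName: string,
  doSplitMergerEvaluation: boolean,
  evalOptions: { annotationId?: string; evalMaxEdgeLength?: number } = {},
): URLSearchParams {
  const urlParams = new URLSearchParams({
    layerName,
    bbox: bbox.join(","),
    newDatasetName,
    doSplitMergerEvaluation: doSplitMergerEvaluation.toString(),
  });
  if (evalOptions.annotationId != null) {
    urlParams.append("annotationId", evalOptions.annotationId);
  }
  if (evalOptions.evalMaxEdgeLength != null) {
    urlParams.append("evalMaxEdgeLength", evalOptions.evalMaxEdgeLength.toString());
  }
  return urlParams;
}
```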
<Collapse
  style={{ marginBottom: 8 }}
  onChange={() => setActive(!isActive)}
  expandIcon={() => <Checkbox checked={isActive} />}
  items={[
    {
      key: "evaluation",
      label: "Evaluation Settings",
      children: (
        <Row>
          <Col style={{ width: "100%" }}>
            <Form.Item
              label="Use sparse ground truth tracing"
              name={["evaluationSettings", "useSparseTracing"]}
              valuePropName="checked"
              initialValue={false}
              tooltip="The evaluation mode can either be `dense`
                in case all processes in the volume are annotated in the ground-truth.
                If not, use the `sparse` mode."
            >
              <Checkbox style={{ width: "100%" }} />
            </Form.Item>
            <Form.Item
              label="Max edge length in nm"
              name={["evaluationSettings", "maxEdgeLength"]}
              tooltip="Ground truth tracings can be densified so that
                nodes are at most max_edge_length nm apart.
                However, this can also introduce wrong nodes in curved processes."
            >
              <InputNumber style={{ width: "100%" }} placeholder="None" />
            </Form.Item>
            <Form.Item
              label="Sparse tube threshold in nm"
              name={["evaluationSettings", "sparseTubeThresholdInNm"]}
              tooltip="Tube threshold for sparse evaluation,
                determining if a process is too far from the ground-truth."
            >
              <InputNumber style={{ width: "100%" }} placeholder="1000" />
            </Form.Item>
            <Form.Item
              label="Sparse minimum merger path length in nm"
              name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
              tooltip="Minimum ground truth path length of a merger component
                to be counted as a relevant merger (for sparse evaluation).
                Note, the path length to neighboring nodes of a component is included for this comparison. This optimistic path length
                estimation makes sure no relevant mergers are ignored."
            >
              <InputNumber style={{ width: "100%" }} placeholder="800" />
            </Form.Item>
            <Form.Item name="useAnnotation" initialValue={true} />
          </Col>
        </Row>
      ),
    },
  ]}
  activeKey={isActive ? "evaluation" : []}
/>
I don't see the issue here.
I'd ignore that, @cdfhalle
function CollapsibleEvaluationSettings({
  isActive = false,
  setActive,
}: { isActive: boolean; setActive: (active: boolean) => void }) {
  return (
    <Collapse
      style={{ marginBottom: 8 }}
      onChange={() => setActive(!isActive)}
      expandIcon={() => <Checkbox checked={isActive} />}
      items={[
        {
          key: "evaluation",
          label: "Evaluation Settings",
          children: (
            <Row>
              <Col style={{ width: "100%" }}>
                <Form.Item
                  label="Use sparse ground truth tracing"
                  name={["evaluationSettings", "useSparseTracing"]}
                  valuePropName="checked"
                  initialValue={false}
                  tooltip="The evaluation mode can either be `dense`
                    in case all processes in the volume are annotated in the ground-truth.
                    If not, use the `sparse` mode."
                >
                  <Checkbox style={{ width: "100%" }} />
                </Form.Item>
                <Form.Item
                  label="Max edge length in nm"
                  name={["evaluationSettings", "maxEdgeLength"]}
                  tooltip="Ground truth tracings can be densified so that
                    nodes are at most max_edge_length nm apart.
                    However, this can also introduce wrong nodes in curved processes."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="None" />
                </Form.Item>
                <Form.Item
                  label="Sparse tube threshold in nm"
                  name={["evaluationSettings", "sparseTubeThresholdInNm"]}
                  tooltip="Tube threshold for sparse evaluation,
                    determining if a process is too far from the ground-truth."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="1000" />
                </Form.Item>
                <Form.Item
                  label="Sparse minimum merger path length in nm"
                  name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
                  tooltip="Minimum ground truth path length of a merger component
                    to be counted as a relevant merger (for sparse evaluation).
                    Note, the path length to neighboring nodes of a component is included for this comparison. This optimistic path length
                    estimation makes sure no relevant mergers are ignored."
                >
                  <InputNumber style={{ width: "100%" }} placeholder="800" />
                </Form.Item>
                <Form.Item name="useAnnotation" initialValue={true} />
              </Col>
            </Row>
          ),
        },
      ]}
      activeKey={isActive ? "evaluation" : []}
    />
  );
}
I'd say to also ignore that @cdfhalle
@@ -11,6 +11,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
 [Commits](https://github.com/scalableminds/webknossos/compare/24.11.1...HEAD)

 ### Added
+- It is now possible to start a split-merger evaluation when starting a neuron inferrence. [#8221](https://github.com/scalableminds/webknossos/pull/8221)
jobSpecificInputFields={
  <CollapsibleEvaluationSettings isActive={useEvaluation} setActive={setUseEvaluation} />
}
I am just guessing here: The split-merger evaluation requires an annotation to work, doesn't it? And more specifically, a skeleton annotation is required, isn't it?
If that's so, please only set the `jobSpecificInputFields` to `CollapsibleEvaluationSettings` if the user currently has an opened annotation with a skeleton annotation.
You can retrieve whether there is a skeleton tracing via:
const hasSkeletonAnnotation = useSelector((state: OxalisState) => state.tracing.skeleton != null);
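The suggested gating can be sketched as a pure predicate; the state shape below is simplified and hypothetical (in webknossos this would be read via `useSelector` as shown above):

```typescript
// Simplified, hypothetical state shape; the real OxalisState is far richer.
type SimplifiedTracing = { skeleton: object | null };

// The split-merger evaluation needs a skeleton (ground-truth) annotation,
// so the evaluation settings should only be offered when one is present.
function shouldShowEvaluationSettings(tracing: SimplifiedTracing | null): boolean {
  return tracing != null && tracing.skeleton != null;
}
```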
children: (
  <Row>
    <Col style={{ width: "100%" }}>
      <Form.Item
This form item with the checkbox looks a little quirky in vertical mode. Could you please make it horizontal?
<Form.Item
  ...
  layout="horizontal"
/>
Steps to test:
TODOs:
Issues:
(Please delete unneeded items, merge only when none are left open)
Summary by CodeRabbit
Release Notes
New Features
Improvements
Bug Fixes