repo-sync-2024-08-26T11:04:20+0800 (#834)
# Pull Request

## What problem does this PR solve?

Issue Number: Fixed #

## Possible side effects?

- Performance:

- Backward compatibility:
anakinxc authored Aug 26, 2024
1 parent f17d37f commit 6564c08
Showing 3 changed files with 87 additions and 99 deletions.
5 changes: 2 additions & 3 deletions libspu/dialect/pphlo/transforms/hlo_legalize_to_pphlo.cc
@@ -1263,10 +1263,9 @@ class HloToPPHloOpConverter<stablehlo::DynamicUpdateSliceOp>
     Type result_type = typetools_.getType(
         this->getTypeConverter()->convertType(op.getType()), result_vis);

-    auto materialized = materializeInputs(op, op->getOperands());
+    auto materialized = materializeInputs(op, adaptor.getOperands());
     rewriter.replaceOpWithNewOp<pphlo::DynamicUpdateSliceOp>(
-        op, result_type, materialized[0], materialized[1],
-        adaptor.getStartIndices());
+        op, TypeRange{result_type}, materialized);

     return success();
   }
18 changes: 9 additions & 9 deletions sml/preprocessing/preprocessing.py
@@ -340,7 +340,7 @@ def fit(self, X, *, zero_variance=False, contain_nan=False):
         contain_nan : bool, default=False
             Set to True to handle the nan value.
-            This option desiced whether to use nanmin and nanmax to compute the minimum
+            This option decides whether to use nanmin and nanmax to compute the minimum
             and maximum.
         """
         self._reset()
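For context (not part of this diff): a minimal NumPy sketch of why the `contain_nan` option described in the docstring matters — plain `min`/`max` propagate NaN, while `nanmin`/`nanmax` skip it.

```python
import numpy as np

# A feature matrix with missing values, one column per feature.
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 2.0]])

# Plain min propagates NaN, which would poison the scaling range.
assert np.isnan(X.min(axis=0)).any()

# nanmin/nanmax ignore NaN, which is what contain_nan=True requests.
data_min = np.nanmin(X, axis=0)  # [1., 2.]
data_max = np.nanmax(X, axis=0)  # [3., 4.]
print(data_min, data_max)
```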
@@ -364,7 +364,7 @@ def partial_fit(self, X, *, zero_variance=False, contain_nan=False):
         contain_nan : bool, default=False
             Set to True to handle the nan value.
-            This option desiced whether to use nanmin and nanmax to compute the minimum
+            This option decides whether to use nanmin and nanmax to compute the minimum
             and maximum.
         """
         feature_range = self.feature_range
@@ -434,7 +434,7 @@ def fit_transform(self, X, *, zero_variance=False, contain_nan=False):
         contain_nan : bool, default=False
             Set to True to handle the nan value.
-            This option desiced whether to use nanmin and nanmax to compute the minimum
+            This option decides whether to use nanmin and nanmax to compute the minimum
             and maximum.

         Returns
@@ -495,7 +495,7 @@ def fit(self, X, zero_maxabs=False, contain_nan=False):
         contain_nan : bool, default=False
             Set to True to handle the nan value.
-            This option desiced whether to use nanmin and nanmax to compute the minimum
+            This option decides whether to use nanmin and nanmax to compute the minimum
             and maximum.
         """
         self._reset()
@@ -519,7 +519,7 @@ def partial_fit(self, X, *, zero_maxabs=False, contain_nan=False):
         contain_nan : bool, default=False
             Set to True to handle the nan value.
-            This option desiced whether to use nanmin and nanmax to compute the minimum
+            This option decides whether to use nanmin and nanmax to compute the minimum
             and maximum.
         """
         first_pass = not hasattr(self, "n_samples_seen_")
@@ -569,7 +569,7 @@ def fit_transform(self, X, *, zero_maxabs=False, contain_nan=False):
         contain_nan : bool, default=False
             Set to True to handle the nan value.
-            This option desiced whether to use nanmin and nanmax to compute the minimum
+            This option decides whether to use nanmin and nanmax to compute the minimum
             and maximum.

         Returns
@@ -687,9 +687,9 @@ def loop_body(i, st):
 class KBinsDiscretizer:
     """Bin continuous data into intervals.
-    Attribute encode is not implemented, since there is currently no onehotencoder
+    Attribute encode is not implemented, since there is currently no OneHotEncoder
     in sml.
-    Attribute subsample is not implemented, since random choise in SPU runtime does
+    Attribute subsample is not implemented, since random choice in SPU runtime does
     not work as expected.

     Parameters
@@ -744,7 +744,7 @@ def fit(
         Note that there is currently no support for handling constant values in a feature
         , since it introduces much redundant boolean computation because dynamic shape is
         not supported.
-        In sklarn, feature with constant value will be replaced with 0 after transformation.
+        In sklearn, feature with constant value will be replaced with 0 after transformation.
         (see https://github.com/scikit-learn/scikit-learn/blob/d139ff234b0f8ec30287e26e0bc801bdafdfbb1a/sklearn/preprocessing/tests/test_discretization.py#L192)

         Parameters
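For context (not from the diff): a rough NumPy sketch of the ordinal, equal-width binning that a `KBinsDiscretizer`-style `fit`/`transform` performs. The helper names below are hypothetical, and the sketch ignores the constant-feature caveat discussed above.

```python
import numpy as np

def uniform_bin_edges(X, n_bins):
    # One set of equal-width edges per feature (strategy="uniform").
    mins, maxs = X.min(axis=0), X.max(axis=0)
    return np.linspace(mins, maxs, n_bins + 1)  # shape: (n_bins+1, n_features)

def ordinal_transform(X, edges):
    # Map each value to the index of its bin, 0 .. n_bins-1.
    n_bins = edges.shape[0] - 1
    out = np.zeros_like(X, dtype=int)
    for j in range(X.shape[1]):
        # Interior edges only; clip keeps boundary values in the last bin.
        out[:, j] = np.clip(np.digitize(X[:, j], edges[1:-1, j]), 0, n_bins - 1)
    return out

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
edges = uniform_bin_edges(X, n_bins=2)
print(ordinal_transform(X, edges))  # [[0 0] [1 1] [1 1]]
```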
