
Commit

Fix confusion with laning in article
ogxd committed Nov 7, 2023
1 parent 8c9c23e commit d896585
Showing 2 changed files with 2 additions and 3 deletions.
Binary file modified article/article.pdf
Binary file not shown.
5 changes: 2 additions & 3 deletions article/article.tex
@@ -277,7 +277,7 @@ \subsection{The Temporal Construction}
\subsubsection{Intermediate Hashes}

Let's define \( n_g = \lfloor {n_b}/{k_b} \rfloor \) as the number of whole groups of \( k_b \) message blocks. \\
- For each lane we compute an intermediate hash, \( H_i \), as follows:
+ For each group we compute an intermediate hash, \( H_i \), as follows:

\begin{align*}
H_{1} &= f(\ldots f(f(M_1, M_2), M_3)\ldots, M_{k_b}), \\
@@ -660,7 +660,7 @@ \subsubsection{Portability}
As previously stated, portability between different state sizes was not a design goal for GxHash. The algorithm functions across a variety of platforms, such as x86 and ARM, with consistent results within the same state size implementation. However, it is important to note that the 128-bit and 256-bit state versions of GxHash will produce distinct hashes for the same input. This discrepancy means that hashes generated by one state size are not directly comparable to those generated by another. To prevent any potential confusion, it is recommended that users choose one state size and maintain consistency throughout the hashing process, especially when hashes may need to be compared or persisted. This limitation is deemed acceptable for use-cases such as in-process hash tables where the state size does not change. Developing a version of GxHash that harmonizes the outputs of different state sizes would involve trade-offs, likely affecting the algorithm’s performance or complexity.

\subsubsection{Compiler Dependencies}
- The Temporal Construction presented in this paper is implemented for GxHash by declaring each lane with its own variable. While it worked at the time of writing (rustc 1.68.0), it is in the end the compiler's responsibility to decide how many registers to use. We cannot exclude that in another context (different version, different language/compiler, ...) the compiler will undo the ILP we tried to implicitly introduce. This could be countered by writing the algorithm directly in assembly code, at the cost of complexity.
+ The Temporal Construction presented in this paper is implemented for GxHash by declaring a temporary variable on each group iteration, assuming that the compiler understands that most of each iteration can be parallelized. While it worked at the time of writing (rustc 1.68.0), it is in the end the compiler's responsibility to decide how many registers to use. We cannot exclude that in another context (different version, different language/compiler, ...) the compiler will undo the ILP we tried to implicitly introduce. This could be countered by writing the algorithm directly in assembly code, at the cost of complexity.

\subsection{Future Work}
Despite the outstanding benchmark results, we think there are still many possible paths for research and improvement. Here is a non-exhaustive list:
@@ -670,7 +670,6 @@ \subsection{Future Work}
\item Benchmark in more test suites to further assess the distribution and avalanche properties.
\item Analyze security properties.
\item Rewrite the algorithm in assembly code or a language that is more explicit about registers.
- \item Introduce more than one stage of laning. For instance 16 lanes, then 8 lanes, then 4 lanes, and finally 2 lanes, to leverage ILP as much as possible.
\item Organize the processing loop so that even less bytecode is generated, favoring inlining opportunities.
\item Fine-tune the finalization stage to find the perfect balance between performance and avalanche effect.
\end{itemize}
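To make the pattern described under Compiler Dependencies concrete, here is a minimal Rust sketch of the Temporal Construction as outlined in the diff above. It is not taken from the GxHash source; `compress`, the plain `u64` state, and the rotation/multiplication constants are hypothetical stand-ins for the article's SIMD compression function f.

```rust
// Hypothetical stand-in for GxHash's compression function f; a plain u64 state
// is used only to keep the sketch self-contained (the real state is a SIMD register).
fn compress(acc: u64, block: u64) -> u64 {
    acc.rotate_left(13) ^ block.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

/// Sketch of the Temporal Construction pattern: each group iteration folds its
/// k_b blocks into a fresh temporary `h`, and `state` is only read and written
/// at the very end of the iteration, so the folding chains of consecutive
/// groups are independent and the compiler is free to overlap them (ILP).
fn temporal_construction(blocks: &[u64], k_b: usize) -> u64 {
    let n_g = blocks.len() / k_b; // number of whole groups, n_g = floor(n_b / k_b)
    let mut state = 0u64;
    for g in 0..n_g {
        let group = &blocks[g * k_b..(g + 1) * k_b];
        // Fresh temporary per group: H_i = f(...f(f(M_1, M_2), M_3)..., M_{k_b})
        let mut h = group[0];
        for &m in &group[1..] {
            h = compress(h, m);
        }
        // Short dependency on `state`: combine the intermediate hash into it.
        state = compress(state, h);
    }
    state
}
```

Whether the per-group chains actually end up in separate registers and overlap in the pipeline is left to the code generator, which is exactly the compiler dependency the paragraph calls out.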
