Important! Template update for nf-core/tools v2.10 #3

Merged
merged 4 commits on Sep 27, 2023
1 change: 1 addition & 0 deletions .devcontainer/devcontainer.json
@@ -2,6 +2,7 @@
"name": "nfcore",
"image": "nfcore/gitpod:latest",
"remoteUser": "gitpod",
"runArgs": ["--privileged"],

// Configure tool-specific properties.
"customizations": {
4 changes: 3 additions & 1 deletion .github/CONTRIBUTING.md
@@ -9,7 +9,9 @@ Please use the pre-filled template to save time.
However, don't be put off by this template - other more general issues and suggestions are welcome!
Contributions to the code are even more welcome ;)

> If you need help using or modifying nf-core/molkart then the best place to ask is on the nf-core Slack [#molkart](https://nfcore.slack.com/channels/molkart) channel ([join our Slack here](https://nf-co.re/join/slack)).
:::info
If you need help using or modifying nf-core/molkart then the best place to ask is on the nf-core Slack [#molkart](https://nfcore.slack.com/channels/molkart) channel ([join our Slack here](https://nf-co.re/join/slack)).
:::

## Contribution workflow

2 changes: 1 addition & 1 deletion .github/workflows/linting.yml
@@ -78,7 +78,7 @@ jobs:

- uses: actions/setup-python@v4
with:
python-version: "3.8"
python-version: "3.11"
architecture: "x64"

- name: Install dependencies
68 changes: 68 additions & 0 deletions .github/workflows/release-announcments.yml
@@ -0,0 +1,68 @@
name: release-announcements
# Automatic release toot and tweet announcements
on:
release:
types: [published]
workflow_dispatch:

jobs:
toot:
runs-on: ubuntu-latest
steps:
- uses: rzr/fediverse-action@master
with:
access-token: ${{ secrets.MASTODON_ACCESS_TOKEN }}
host: "mstdn.science" # custom host if not "mastodon.social" (default)
# GitHub event payload
# https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#release
message: |
Pipeline release! ${{ github.repository }} v${{ github.event.release.tag_name }} - ${{ github.event.release.name }}!

Please see the changelog: ${{ github.event.release.html_url }}

send-tweet:
runs-on: ubuntu-latest

steps:
- uses: actions/setup-python@v4
with:
python-version: "3.10"
- name: Install dependencies
run: pip install tweepy==4.14.0
- name: Send tweet
shell: python
run: |
import os
import tweepy

client = tweepy.Client(
access_token=os.getenv("TWITTER_ACCESS_TOKEN"),
access_token_secret=os.getenv("TWITTER_ACCESS_TOKEN_SECRET"),
consumer_key=os.getenv("TWITTER_CONSUMER_KEY"),
consumer_secret=os.getenv("TWITTER_CONSUMER_SECRET"),
)
tweet = os.getenv("TWEET")
client.create_tweet(text=tweet)
env:
TWEET: |
Pipeline release! ${{ github.repository }} v${{ github.event.release.tag_name }} - ${{ github.event.release.name }}!

Please see the changelog: ${{ github.event.release.html_url }}
TWITTER_CONSUMER_KEY: ${{ secrets.TWITTER_CONSUMER_KEY }}
TWITTER_CONSUMER_SECRET: ${{ secrets.TWITTER_CONSUMER_SECRET }}
TWITTER_ACCESS_TOKEN: ${{ secrets.TWITTER_ACCESS_TOKEN }}
TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}

bsky-post:
runs-on: ubuntu-latest
steps:
- uses: zentered/[email protected]
with:
post: |
Pipeline release! ${{ github.repository }} v${{ github.event.release.tag_name }} - ${{ github.event.release.name }}!

Please see the changelog: ${{ github.event.release.html_url }}
env:
BSKY_IDENTIFIER: ${{ secrets.BSKY_IDENTIFIER }}
BSKY_PASSWORD: ${{ secrets.BSKY_PASSWORD }}
#
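For local testing, the inline `send-tweet` step above can be reproduced as a standalone script. This is a minimal sketch, not part of the template: it assumes the same four `TWITTER_*` secrets are exported as environment variables and that `TWEET` holds the message text.

```python
import os
import sys

import tweepy  # pip install tweepy==4.14.0, the version pinned by the workflow

# Fail fast if any credential is missing instead of letting the API call error out.
required = (
    "TWITTER_CONSUMER_KEY",
    "TWITTER_CONSUMER_SECRET",
    "TWITTER_ACCESS_TOKEN",
    "TWITTER_ACCESS_TOKEN_SECRET",
)
missing = [name for name in required if not os.getenv(name)]
if missing:
    sys.exit(f"Missing credentials: {', '.join(missing)}")

client = tweepy.Client(
    consumer_key=os.getenv("TWITTER_CONSUMER_KEY"),
    consumer_secret=os.getenv("TWITTER_CONSUMER_SECRET"),
    access_token=os.getenv("TWITTER_ACCESS_TOKEN"),
    access_token_secret=os.getenv("TWITTER_ACCESS_TOKEN_SECRET"),
)
client.create_tweet(text=os.getenv("TWEET", "test post"))
```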
2 changes: 1 addition & 1 deletion CITATIONS.md
@@ -12,7 +12,7 @@

- [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)

> Andrews, S. (2010). FastQC: A Quality Control Tool for High Throughput Sequence Data [Online]. Available online https://www.bioinformatics.babraham.ac.uk/projects/fastqc/.
> Andrews, S. (2010). FastQC: A Quality Control Tool for High Throughput Sequence Data [Online].

- [MultiQC](https://pubmed.ncbi.nlm.nih.gov/27312411/)

133 changes: 102 additions & 31 deletions CODE_OF_CONDUCT.md

Large diffs are not rendered by default.

25 changes: 14 additions & 11 deletions README.md
@@ -1,6 +1,7 @@
# ![nf-core/molkart](docs/images/nf-core-molkart_logo_light.png#gh-light-mode-only) ![nf-core/molkart](docs/images/nf-core-molkart_logo_dark.png#gh-dark-mode-only)

[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/molkart/results)[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.XXXXXXX-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.XXXXXXX)
[![GitHub Actions CI Status](https://github.com/nf-core/molkart/workflows/nf-core%20CI/badge.svg)](https://github.com/nf-core/molkart/actions?query=workflow%3A%22nf-core+CI%22)
[![GitHub Actions Linting Status](https://github.com/nf-core/molkart/workflows/nf-core%20linting/badge.svg)](https://github.com/nf-core/molkart/actions?query=workflow%3A%22nf-core+linting%22)[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/molkart/results)[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.XXXXXXX-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.XXXXXXX)

[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A523.04.0-23aa62.svg)](https://www.nextflow.io/)
[![run with conda](http://img.shields.io/badge/run%20with-conda-3EB049?labelColor=000000&logo=anaconda)](https://docs.conda.io/en/latest/)
@@ -11,12 +12,12 @@
[![Get help on Slack](http://img.shields.io/badge/slack-nf--core%20%23molkart-4A154B?labelColor=000000&logo=slack)](https://nfcore.slack.com/channels/molkart)[![Follow on Twitter](http://img.shields.io/badge/twitter-%40nf__core-1DA1F2?labelColor=000000&logo=twitter)](https://twitter.com/nf_core)[![Follow on Mastodon](https://img.shields.io/badge/mastodon-nf__core-6364ff?labelColor=FFFFFF&logo=mastodon)](https://mstdn.science/@nf_core)[![Watch on YouTube](http://img.shields.io/badge/youtube-nf--core-FF0000?labelColor=000000&logo=youtube)](https://www.youtube.com/c/nf-core)

## Important note

This pipeline is under active development and does not work at this point. If you want to process your Resolve Bioscience data in the meantime, please go to https://github.com/FloWuenne/nf-MolKart, where you can find a working Nextflow pipeline outside the nf-core ecosystem. In the coming weeks, nf-MolKart will be completely transitioned into nf-core/molkart for future use.

## Introduction

**nf-core/molkart** is a pipeline for processing Molecular Cartography data from Resolve Bioscience (combinatorial FISH). It takes as input a table of FISH spot positions (x,y,z,gene), a corresponding DAPI image (tiff format) and optionally a membrane staining image in tiff format. nf-core/molkart performs end-to-end processing of the data including image processing, QC filtering of spots, cell segmentation, spot to cell assignment and reports quality metrics such as the spot assignment rate, average spots per cell and segmentation mask size ranges.

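To make the reported metrics concrete, here is a minimal sketch of how a spot assignment rate and average spots per cell can be derived from a table of spots with assigned cell labels. It is not part of the pipeline, and the column names (`gene`, `cell_id`) are hypothetical.

```python
import pandas as pd

# Hypothetical QC-filtered spot table; cell_id == 0 marks unassigned background spots.
spots = pd.DataFrame(
    {
        "x": [10.5, 11.2, 40.0, 42.3],
        "y": [5.0, 5.5, 20.1, 21.7],
        "gene": ["Vim", "Vim", "Pecam1", "Pecam1"],
        "cell_id": [1, 1, 2, 0],
    }
)

assigned = spots[spots["cell_id"] > 0]
assignment_rate = len(assigned) / len(spots)         # fraction of spots inside a cell
spots_per_cell = assigned.groupby("cell_id").size()  # spot count per segmented cell

print(f"Spot assignment rate: {assignment_rate:.2%}")          # 75.00%
print(f"Average spots per cell: {spots_per_cell.mean():.1f}")  # 1.5
```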

<!-- TODO nf-core: Include a figure that guides the user through the major workflow steps. Many nf-core
workflows use the "tube map" design for that. See https://nf-co.re/docs/contributing/design_guidelines#examples for examples. -->
@@ -27,10 +28,11 @@

## Usage

> **Note**
> If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how
> to set-up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline)
> with `-profile test` before running the workflow on actual data.
:::note
If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how
to set up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline)
with `-profile test` before running the workflow on actual data.
:::

<!-- TODO nf-core: Describe the minimum required steps to execute the pipeline, e.g. how to prepare samplesheets.
Explain what rows and columns represent. For instance (please edit as appropriate):
@@ -59,10 +61,11 @@ nextflow run nf-core/molkart \
--outdir <OUTDIR>
```

> **Warning:**
> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those
> provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
> see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).
:::warning
Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those
provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).
:::

For more details and further functionality, please refer to the [usage documentation](https://nf-co.re/molkart/usage) and the [parameter documentation](https://nf-co.re/molkart/parameters).

4 changes: 2 additions & 2 deletions assets/multiqc_config.yml
@@ -1,7 +1,7 @@
report_comment: >
This report has been generated by the <a href="https://github.com/nf-core/molkart/1.0dev" target="_blank">nf-core/molkart</a>
This report has been generated by the <a href="https://github.com/nf-core/molkart/releases/tag/dev" target="_blank">nf-core/molkart</a>
analysis pipeline. For information about how to interpret these results, please see the
<a href="https://nf-co.re/molkart/1.0dev/output" target="_blank">documentation</a>.
<a href="https://nf-co.re/molkart/dev/docs/output" target="_blank">documentation</a>.
report_section_order:
"nf-core-molkart-methods-description":
order: -1000
80 changes: 43 additions & 37 deletions bin/apply_clahe.dask.py
@@ -22,7 +22,8 @@
from os.path import abspath
from argparse import ArgumentParser as AP
import time
#from memory_profiler import profile

# from memory_profiler import profile
# This API is apparently changing in skimage 1.0 but it's not clear to
# me what the replacement will be, if any. We'll explicitly import
# this so it will break loudly if someone tries this with skimage 1.0.
@@ -34,7 +35,7 @@

def get_args():
# Script description
description="""Easy-to-use, large scale CLAHE"""
description = """Easy-to-use, large scale CLAHE"""

# Add parser
parser = AP(description=description, formatter_class=argparse.RawDescriptionHelpFormatter)
@@ -43,10 +44,16 @@ def get_args():
inputs = parser.add_argument_group(title="Required Input", description="Path to required input file")
inputs.add_argument("-r", "--raw", dest="raw", action="store", required=True, help="File path to input image.")
inputs.add_argument("-o", "--output", dest="output", action="store", required=True, help="Path to output image.")
inputs.add_argument("-c", "--channel", dest="channel", action="store", required=True, help="Channel on which CLAHE will be applied")
inputs.add_argument(
"-c", "--channel", dest="channel", action="store", required=True, help="Channel on which CLAHE will be applied"
)
inputs.add_argument("-l", "--cliplimit", dest="clip", action="store", required=True, help="Clip Limit for CLAHE")
inputs.add_argument("--kernel", dest="kernel", action="store", required=False, default=None, help="Kernel size for CLAHE")
inputs.add_argument("-g", "--nbins", dest="nbins", action="store", required=False, default=256, help="Number of bins for CLAHE")
inputs.add_argument(
"--kernel", dest="kernel", action="store", required=False, default=None, help="Kernel size for CLAHE"
)
inputs.add_argument(
"-g", "--nbins", dest="nbins", action="store", required=False, default=256, help="Number of bins for CLAHE"
)
inputs.add_argument("-p", "--pixel-size", dest="pixel_size", action="store", required=True, help="Image pixel size")

arg = parser.parse_args()
@@ -61,54 +68,58 @@ def get_args():

return arg


def preduce(coords, img_in, img_out):
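# Reduce one tile by 2x: `coords` is ((y1, x1), (y2, x2)) in img_in; the local-mean
# downscaled tile is written into the matching half-size region of img_out.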
print(img_in.dtype)
(iy1, ix1), (iy2, ix2) = coords
(oy1, ox1), (oy2, ox2) = np.array(coords) // 2
tile = skimage.img_as_float32(img_in[iy1:iy2, ix1:ix2])
tile = skimage.transform.downscale_local_mean(tile, (2, 2))
tile = dtype_convert(tile, 'uint16')
#tile = dtype_convert(tile, img_in.dtype)
tile = dtype_convert(tile, "uint16")
# tile = dtype_convert(tile, img_in.dtype)
img_out[oy1:oy2, ox1:ox2] = tile


def format_shape(shape):
return "%dx%d" % (shape[1], shape[0])


def subres_tiles(level, level_full_shapes, tile_shapes, outpath, scale):
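# Generator for one pyramid level: read the previous level back from the partially
# written OME-TIFF via zarr, downscale each tile by `scale`, and yield the tiles.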
print(f"\n processing level {level}")
assert level >= 1
num_channels, h, w = level_full_shapes[level]
tshape = tile_shapes[level] or (h, w)
tiff = tifffile.TiffFile(outpath)
zimg = zarr.open(tiff.aszarr(series=0, level=level-1, squeeze=False))
zimg = zarr.open(tiff.aszarr(series=0, level=level - 1, squeeze=False))
for c in range(num_channels):
sys.stdout.write(
f"\r processing channel {c + 1}/{num_channels}"
)
sys.stdout.write(f"\r processing channel {c + 1}/{num_channels}")
sys.stdout.flush()
th = tshape[0] * scale
tw = tshape[1] * scale
for y in range(0, zimg.shape[1], th):
for x in range(0, zimg.shape[2], tw):
a = zimg[c, y:y+th, x:x+tw, 0]
a = skimage.transform.downscale_local_mean(
a, (scale, scale)
)
a = zimg[c, y : y + th, x : x + tw, 0]
a = skimage.transform.downscale_local_mean(a, (scale, scale))
if np.issubdtype(zimg.dtype, np.integer):
a = np.around(a)
a = a.astype('uint16')
a = a.astype("uint16")
yield a


def main(args):
print(f"Head directory = {args.raw}")
print(f"Channel = {args.channel}, ClipLimit = {args.clip}, nbins = {args.nbins}, kernel_size = {args.kernel}, pixel_size = {args.pixel_size}")
print(
f"Channel = {args.channel}, ClipLimit = {args.clip}, nbins = {args.nbins}, kernel_size = {args.kernel}, pixel_size = {args.pixel_size}"
)

#clahe = cv2.createCLAHE(clipLimit = int(args.clip), tileGridSize=tuple(map(int, args.grid)))
# clahe = cv2.createCLAHE(clipLimit = int(args.clip), tileGridSize=tuple(map(int, args.grid)))

img_raw = AI.AICSImage(args.raw)
img_dask = img_raw.get_image_dask_data("CYX").astype('uint16')
adapted = img_dask[args.channel].compute()/65535
adapted = (equalize_adapthist(adapted, kernel_size=args.kernel, clip_limit=args.clip, nbins=args.nbins)*65535).astype('uint16')
img_dask = img_raw.get_image_dask_data("CYX").astype("uint16")
adapted = img_dask[args.channel].compute() / 65535
adapted = (
equalize_adapthist(adapted, kernel_size=args.kernel, clip_limit=args.clip, nbins=args.nbins) * 65535
).astype("uint16")
img_dask[args.channel] = adapted
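# Note: equalize_adapthist operates on floats in [0, 1], so the channel is scaled
# down from uint16, equalized, and mapped back to the full 16-bit range above.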

# construct levels
@@ -121,7 +132,7 @@ def main(args):
num_channels = img_dask.shape[0]
num_levels = (np.ceil(np.log2(max(base_shape) / tile_size)) + 1).astype(int)
factors = 2 ** np.arange(num_levels)
shapes = (np.ceil(np.array(base_shape) / factors[:,None])).astype(int)
shapes = (np.ceil(np.array(base_shape) / factors[:, None])).astype(int)

print("Pyramid level sizes: ")
for i, shape in enumerate(shapes):
@@ -137,40 +148,35 @@ def main(args):
level_full_shapes.append((num_channels, shape[0], shape[1]))
level_shapes = shapes
tip_level = np.argmax(np.all(level_shapes < tile_size, axis=1))
tile_shapes = [
(tile_size, tile_size) if i <= tip_level else None
for i in range(len(level_shapes))
]
tile_shapes = [(tile_size, tile_size) if i <= tip_level else None for i in range(len(level_shapes))]

# write pyramid
with tifffile.TiffWriter(args.output, ome=True, bigtiff=True) as tiff:
tiff.write(
data = img_dask,
shape = level_full_shapes[0],
subifds=int(num_levels-1),
data=img_dask,
shape=level_full_shapes[0],
subifds=int(num_levels - 1),
dtype=dtype,
resolution=(10000 / pixel_size, 10000 / pixel_size, "centimeter"),
tile=tile_shapes[0]
tile=tile_shapes[0],
)
for level, (shape, tile_shape) in enumerate(
zip(level_full_shapes[1:], tile_shapes[1:]), 1
):
for level, (shape, tile_shape) in enumerate(zip(level_full_shapes[1:], tile_shapes[1:]), 1):
tiff.write(
data = subres_tiles(level, level_full_shapes, tile_shapes, args.output, scale),
data=subres_tiles(level, level_full_shapes, tile_shapes, args.output, scale),
shape=shape,
subfiletype=1,
dtype=dtype,
tile=tile_shape
tile=tile_shape,
)

# note about metadata: the channels, planes etc were adjusted not to include the removed channels, however
# the channel ids have stayed the same as before removal. E.g if channels 1 and 2 are removed,
# the channel ids in the metadata will skip indices 1 and 2 (channel_id:0, channel_id:3, channel_id:4 ...)
#tifffile.tiffcomment(args.output, to_xml(metadata))
# tifffile.tiffcomment(args.output, to_xml(metadata))
print()


if __name__ == '__main__':
if __name__ == "__main__":
# Read in arguments
args = get_args()

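As a sanity check on the pyramid arithmetic in `main()`, the level computation can be reproduced standalone. A minimal sketch, assuming a hypothetical image shape; the `tile_size` value of 1024 is also an assumption, since its assignment sits in a folded hunk.

```python
import numpy as np

tile_size = 1024             # assumed pyramid tile size
base_shape = (20000, 18000)  # hypothetical (height, width) of the input image

num_levels = int(np.ceil(np.log2(max(base_shape) / tile_size)) + 1)
factors = 2 ** np.arange(num_levels)  # per-level downscale factor: 1, 2, 4, ...
shapes = np.ceil(np.array(base_shape) / factors[:, None]).astype(int)

for level, shape in enumerate(shapes):
    print(f"level {level}: {shape[1]}x{shape[0]}")
# Six levels here: 18000x20000 down to 563x625, the first level that fits in one tile.
```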