Workflows

Stable

Galaxy Workflow created on the Galaxy-E European instance, ecology.usegalaxy.eu, related to the Galaxy training tutorial "Sentinel 2 biodiversity".

This workflow analyzes Sentinel-2 remote sensing satellite data to compute spectral indices such as the NDVI and to visualize biodiversity indicators.
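The NDVI mentioned above is computed from the red and near-infrared reflectance bands. A minimal pure-Python sketch of the formula (an illustration only, not part of the Galaxy workflow):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1;
    dense green vegetation yields values close to 1.
    """
    return (nir - red) / (nir + red)

# Example reflectance values for a vegetated pixel (illustrative numbers)
print(ndvi(0.75, 0.25))  # -> 0.5
```

In practice the workflow applies this band arithmetic per pixel across whole Sentinel-2 rasters rather than to single values.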

Type: Galaxy

Creators: Yvan Le Bras, Coline Royaux, Marie Jossé

Submitter: Yvan Le Bras

Stable

Galaxy Workflow created on the Galaxy-E European instance, ecology.usegalaxy.eu, related to the Galaxy training tutorial "Biodiversity data exploration".

This workflow explores biodiversity data, checking homoscedasticity, normality, and collinearity of presence-absence or abundance data, and compares beta diversity taking into account space, time, and species components ...
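Beta diversity between sites is commonly quantified with the Bray-Curtis dissimilarity on abundance data. A minimal sketch of that measure (an illustration under the usual definition, not the workflow's actual tool):

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors.

    0 means identical communities; 1 means no shared species.
    """
    shared = sum(min(x, y) for x, y in zip(a, b))  # shared abundance
    return 1 - 2 * shared / (sum(a) + sum(b))

site1 = [10, 0, 4]   # species counts at site 1 (illustrative)
site2 = [6, 2, 4]    # species counts at site 2 (illustrative)
print(bray_curtis(site1, site2))
```

Comparing such pairwise dissimilarities across space and time is one way the components mentioned above can be separated.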

Type: Galaxy

Creators: Yvan Le Bras, Coline Royaux, Marie Jossé

Submitter: Yvan Le Bras

DOI: 10.48546/workflowhub.workflow.656.1

Stable

Galaxy Workflow created on the Galaxy-E European instance, ecology.usegalaxy.eu, related to the Galaxy training tutorial "Metabarcoding/eDNA through Obitools".

This workflow analyzes DNA metabarcoding/eDNA data produced on Illumina sequencers using the OBITools.

Work-in-progress

Autosubmit mHM test domains

Type: Autosubmit

Creator: Bruno P. Kinoshita

Submitter: Bruno P. Kinoshita

MMV Im2Im Transformation


A generic Python package for deep-learning-based image-to-image transformation in biomedical applications

The main branch will be further developed to incorporate the latest state-of-the-art techniques and methods. To reproduce the results of our manuscript, we refer to the branch ...

Type: Python

Creator: Justin Sonneck

Submitter: Justin Sonneck

DOI: 10.48546/workflowhub.workflow.626.1

Work-in-progress

rquest-omop-worker-workflows

Source for workflow definitions for the open-source RQuest OMOP Worker tool developed for Hutch/TRE-FX

Note: ARM workflows are currently broken. x86 ones work.

Inputs

### Body

Sample input payload:

```json
{
  "task_id": "job-2023-01-13-14:20:38-",
  "project": "",
  "owner": "",
  "cohort": {
    "groups": [
      {
        "rules": [
          {
            "varname": "OMOP",
            "varcat": "Person",
            "type": "TEXT",
            "oper": "=",
            "value": "8507"
          }
        ],
        "rules_oper": "AND"
      }
    ],
    "groups_oper": "OR"
  },
  "collection":
  ...
```
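The payload's `rules_oper` and `groups_oper` fields suggest a boolean structure: rules within a group are combined with one operator, and group results with the other. A hypothetical evaluator sketch of that structure (the field names come from the sample payload; the evaluation logic here is an assumption, not RQuest's actual implementation):

```python
def combine(values, oper):
    # "AND" requires every result to hold; anything else is treated as "OR".
    return all(values) if oper == "AND" else any(values)

def evaluate(cohort, rule_result):
    """rule_result: callable mapping a rule dict to True/False."""
    group_results = [
        combine([rule_result(r) for r in g["rules"]], g["rules_oper"])
        for g in cohort["groups"]
    ]
    return combine(group_results, cohort["groups_oper"])

# Minimal cohort mirroring the sample payload's shape
cohort = {
    "groups": [
        {"rules": [{"varname": "OMOP", "value": "8507"}],
         "rules_oper": "AND"},
    ],
    "groups_oper": "OR",
}
print(evaluate(cohort, lambda r: r["value"] == "8507"))  # -> True
```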
Stable

Summary

The data preparation pipeline contains tasks for two distinct scenarios: leukaemia, with microarray data for 119 patients, and ovarian cancer, with next-generation sequencing data for 380 patients.

The disease outcome prediction pipeline offers two strategies for this task:

Graph kernel method: it starts by generating personalized networks for ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Stable

Summary

This pipeline contains the following functions: (1) data processing to handle the transformations needed to obtain the original pathway scores of the samples according to single-sample GSEA; (2) model training based on the disease and healthy sample pathway scores, to classify them; (3) scoring-matrix weight optimization according to a gold-standard list of drugs (those that went to clinical trials or are approved for the disease). It tests the weights in a range of 0 to 30 (you ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Stable

Summary

The PPI information aggregation pipeline starts by retrieving all datasets in the GEO database whose material was generated using expression profiling by high-throughput sequencing. From each dataset identifier, it extracts the supplementary files that contain the counts table. Once the download step finishes, it identifies those that were already normalized or that contain raw counts to normalize. It also identifies and maps the gene IDs to UniProt (the IDs found usually ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Stable

Summary

The major goal of this pipeline is to provide a tool for the formalization and standardization of protein-protein interaction (PPI) prediction data using the OntoPPI ontology. The pipeline is split into two parts: (i) one part prepares data from three main sources of PPI data (HINT, STRING and PredPrin) and creates the standard files to be processed ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins
