TECHNIQUES AND RESOURCES
EPySeg: a coding-free solution for automated segmentation of epithelia using deep learning
Benoit Aigouy, Claudio Cortes, Shanda Liu, Benjamin Prud'Homme
Development 2020 147: dev194589 doi: 10.1242/dev.194589 Published 23 December 2020
1Aix Marseille University, CNRS, IBDM, 13288 Marseille, France (B.A., C.C., B.P.)
2Max Planck Institute for Plant Breeding Research, 50829 Köln, Germany (S.L.)

Authors for correspondence: benoit.aigouy@univ-amu.fr, benjamin.prudhomme@univ-amu.fr

Handling Editor: James Briscoe

ABSTRACT

Epithelia are dynamic tissues that self-remodel during their development. During morphogenesis, the tissue-scale organization of epithelia emerges from the sum of the individual contributions of the cells constituting the tissue. Therefore, understanding any morphogenetic event first requires a thorough segmentation of the constituent cells of the tissue. This task, however, usually involves extensive manual correction, even with semi-automated tools. Here, we present EPySeg, an open-source, coding-free software that uses deep learning to segment membrane-stained epithelial tissues automatically and very efficiently. EPySeg, which comes with a straightforward graphical user interface, can be used as a Python package on a local computer, or on the cloud via Google Colab for users not equipped with deep-learning compatible hardware. By substantially reducing human input in image segmentation, EPySeg accelerates and improves the characterization of epithelial tissues for all developmental biologists.

INTRODUCTION

Epithelia are dynamic tissues undergoing dramatic shape changes throughout their development. A prerequisite for understanding these morphogenetic events is the thorough segmentation of cells constituting the tissue. To this aim, numerous semi-automated methods have been developed (Aigouy et al., 2016; Farrell et al., 2017; Cilla et al., 2015; Heller et al., 2016), but they require time-consuming manual correction to achieve optimal segmentation.

Over the past few years, deep learning, and more particularly convolutional neural networks (CNNs), has reshaped the computer vision field. In particular, deep-learning approaches should be beneficial for image segmentation because they could, in theory, reduce or even eliminate the need for end-user correction of the segmentation output. The advent of simple programming frameworks, such as Keras (https://github.com/fchollet/keras) and TensorFlow (Abadi et al., 2016 preprint), has made deep learning accessible to most developers, but it still excludes people lacking coding skills, preventing deep learning from being broadly adopted by the scientific community. A few attempts to bring CNNs to well-known image processing frameworks such as ImageJ or FIJI exist (Schmidt et al., 2018; Weigert et al., 2018; Gómez-de-Mariscal et al., 2019 preprint; Schindelin et al., 2012; Schneider et al., 2012), but they require an up-to-date and adequately configured computer. More importantly, those powerful, yet very poorly generalizable, CNNs most often need to be trained de novo on user-provided data to work efficiently. Unfortunately, in most cases, such training cannot be done directly in FIJI or ImageJ and requires coding expertise. So far, little effort has been made to facilitate CNN training and use by regular users (von Chamier et al., 2020 preprint; Buchholz et al., 2020 preprint).

To address all these limitations, we present EPySeg, a coding-free solution to efficiently segment raw images of epithelial tissues using a pre-trained neural network. Furthermore, EPySeg comes with a complete and straightforward graphical user interface (GUI), allowing users who are curious about deep learning, as well as more advanced users, to build and train custom networks to achieve any segmentation paradigm of interest. EPySeg is available at https://github.com/baigouy/EPySeg, and a minimal version can also be used on Google Colab (https://github.com/baigouy/notebooks) for users equipped only with low-end graphics cards.

RESULTS AND DISCUSSION

In this study, we set out to develop software that uses deep learning to automate the time-consuming segmentation of 2D epithelial tissue images. We selected a LinkNet architecture because it is known to perform well at image segmentation tasks (Chaurasia and Culurciello, 2017; see also Materials and Methods). Our network was trained on a large number of images of highly divergent fly epithelia acquired using several microscopy setups (see Materials and Methods), so that our segmentation paradigm would be robust and able to segment a broad range of epithelial tissues. Cell segmentations were generated using the watershed algorithm (Vincent and Soille, 1991), followed by careful manual curation to remove errors (see Materials and Methods). In EPySeg, each watershed segmentation was converted into a set of five watershed-like segmentations and two watershed seeds (see Materials and Methods) that the EPySeg neural network is trained to generate when given an input epithelial image (Fig. 1). The seven outputs generated by the neural network are combined into a single watershed mask by averaging and thresholding (Fig. 1). This mask corresponds to an optimized watershed-like segmentation of the tissue.
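
For readers who want to reproduce this combination step, the following minimal Python sketch averages and thresholds the seven per-output watershed masks; the (H, W, 7) layout and the 0.5 threshold are our illustrative assumptions, not necessarily EPySeg's actual implementation.

```python
import numpy as np

def combine_watershed_masks(masks: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Average seven binary watershed masks and threshold into one refined mask.

    `masks` is assumed to be a (H, W, 7) array of 0/1 watershed masks, one per
    network output, where 1 marks a cell boundary.
    """
    averaged = masks.astype(float).mean(axis=-1)  # pixel-wise agreement in [0, 1]
    return averaged >= threshold                  # refined watershed-like mask
```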

Fig. 1. EPySeg segmentation pipeline. An unseen image of cells labelled with a membrane marker is provided to the EPySeg pre-trained neural network. EPySeg produces seven outputs from it: five of them are watershed-like outputs, while the remaining two are watershed seeds. These seven outputs are used to generate seven watershed masks. Upon thresholding the average of these seven masks, we obtain a refined mask.

EPySeg, although trained exclusively on fly epithelia, can efficiently segment evolutionarily distant 2D epithelial tissues imaged with different optics (Fig. 2; Table S1). We compared our software to Cellpose, the only software available to date that can segment cells without the need for prior model training (Stringer et al., 2020 preprint). On average, EPySeg outperformed Cellpose on epithelia in two ways: its approximation of the cell outline was more precise than that of Cellpose (Fig. S1, Table S1), and it missed fewer cells (Fig. 2; Fig. S2; Table S1). We note, however, that unlike Cellpose, EPySeg was not able to segment cells in culture (Table S1) and is likely to be less efficient at segmenting non-cellular objects than Cellpose, because it was not trained to accomplish such tasks.

Fig. 2. EPySeg segmentation of unseen epithelium images. (A-D) EPySeg segmentation (red) overlaid on unseen images. (A) Segmentation of the Drosophila head epithelium, including ocelli, labelled with E-cadherin:GFP (greyscale). (B) Segmentation of the fourth leaf of a plasma membrane-labelled Arabidopsis thaliana plant at 7 days post germination (greyscale, UBQ10::acyl:tdTomato). (C) Segmentation of Phalloidin-labelled (greyscale) vertebrate dorsal pericardial wall epithelium (Cortes et al., 2018; Francou et al., 2017). (D) Segmentation of the Drosophila abdominal region surrounding a histoblast nest, labelled with E-cadherin:GFP (greyscale). Scale bars: 25 µm.

Finally, to make our epithelial segmentation tool easily accessible to a broad audience, we created a GUI and detailed documentation for its use (https://github.com/baigouy/EPySeg). This interface allows for building, training and running CNNs. It is built in such a way that non-expert users can rely on the default settings to easily train a network and gain knowledge on using deep learning for image analysis, whereas advanced users can visually fine-tune parameters to achieve optimal results. Because the majority of computers available in research labs are not deep learning-ready, we also provide a minimal user interface to run EPySeg online, in Google Colab, granting a broader audience access to deep-learning approaches (https://github.com/baigouy/notebooks).

MATERIALS AND METHODS

Recommended equipment

The EPySeg CNN was trained on a Dell Precision 7820 with 64 GB RAM, equipped with an Nvidia GeForce RTX 2070 graphics card with 8 GB RAM. Most training runs lasted less than 12 h. We could also successfully train and run our CNN on Google Colab, thereby providing a good alternative for users with deep learning-incompatible systems.

Data

The EPySeg CNN was trained on several highly divergent Drosophila epithelia stained with E-cadherin:GFP. One training set consisted of tissue from embryonic stages, where E-cadherin staining in epithelia appeared dotted (Tepass and Hartenstein, 1994; Truong Quang et al., 2013; Cavey et al., 2008) and the boundary-to-cytoplasm signal ratio was low. Another training set used pupal wing tissue, where E-cadherin staining appeared continuous and presented a higher boundary-to-cytoplasm ratio, except for stretched cells. Finally, our third training set contained images of the fly abdomen, including giant, polyploid, larval cells and tiny histoblast nest cells (Madhavan and Madhavan, 1980), in order to obtain a network that can segment cells without a size bias. Input images were either maximum-intensity or stack-focuser projections (using the Stack Focuser ImageJ plugin; https://imagej.nih.gov/ij/plugins/stack-focuser.html) of all or part of confocal z-stacks of epithelial tissues. Segmented cell outlines, serving as ground truth for training the network and for evaluating the segmentation quality, were generated using the watershed algorithm of Tissue Analyzer (Aigouy et al., 2016; Vincent and Soille, 1991). Importantly, we paid close attention to the quality of the segmentation masks fed to the CNN, and we cropped out regions where segmentation quality was poor, as well as regions that were not segmented (e.g. cells adjacent to the tissue of interest), in order not to perturb the learning process. For training the model, every watershed segmentation mask was used to generate seven images: the first was the curated watershed mask itself; the second and third were the same watershed mask after one or two binary dilations, respectively; the fourth and fifth were the negatives of the second and third images, respectively (akin to a non-cellular background); the sixth was generated to contain a single seed (group of pixels) per cell, scaled by the cell size; and the seventh was the negative of the sixth. The model was asked to reproduce these seven outputs for any given input. Two of the three training datasets were acquired on regular Leica or Zeiss confocal microscopes (Leica SP2 and LSM 510, respectively), whereas the third dataset was acquired on a spinning-disc microscope (Roper) to expand the breadth of optics used. The plant sample used for testing our segmentation is the fourth leaf of a transgenic Arabidopsis thaliana plant, 7 days after germination, labelled with UBQ10::acyl:tdTomato (modified from the construct by Willis et al., 2016). The vertebrate test sample is a ventral view of the dorsal pericardial wall epithelium stained with Phalloidin. These two test samples were acquired using a Leica SP8 upright confocal microscope and a Zeiss LSM 780, respectively. The fly wing and abdominal test samples were acquired using an Olympus FV-1000 confocal microscope. The fly head sample was acquired using an LSM 510 microscope.
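
To make the target-generation scheme concrete, here is a minimal Python sketch that derives the seven training images from one curated binary watershed mask; the seed-scaling heuristic and all function names are simplified stand-ins of our own, not EPySeg's actual code.

```python
import numpy as np
from scipy import ndimage

def make_targets(watershed_mask: np.ndarray) -> np.ndarray:
    """Return a (H, W, 7) stack of training targets from a 0/1 boundary mask."""
    m = watershed_mask.astype(bool)                   # target 1: curated mask
    dil1 = ndimage.binary_dilation(m, iterations=1)   # target 2: one dilation
    dil2 = ndimage.binary_dilation(m, iterations=2)   # target 3: two dilations
    # Targets 6/7: one seed per cell, scaled with cell size (simplified heuristic).
    cells, n = ndimage.label(~m)                      # label cell interiors
    seeds = np.zeros_like(m)
    for lbl in range(1, n + 1):
        cell = cells == lbl
        iters = max(1, int(np.sqrt(cell.sum()) / 4))  # erode more for bigger cells
        seeds |= ndimage.binary_erosion(cell, iterations=iters)
    # Targets 4/5 are the negatives of targets 2/3 (non-cellular background).
    return np.stack([m, dil1, dil2, ~dil1, ~dil2, seeds, ~seeds], axis=-1)
```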

Data augmentation

To further increase the size of our training set for deep learning (images and cells) and to prevent the neural network from overfitting, we used data augmentation: we randomly applied the same transformation (rotation, translation, magnification, flip, …) within a given range to both the input and output images. Our data augmentation algorithm currently supports 2D and 3D images (only 2D images were used in this study).
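
As a minimal sketch of this paired augmentation (2D case), the snippet below draws one random transformation and applies it identically to the input image and its target stack; the transform types and ranges are illustrative, not EPySeg's exact settings.

```python
import numpy as np
from scipy import ndimage

def augment_pair(image, targets, rng: np.random.Generator):
    """Apply the same randomly drawn rotation/flip to input and targets."""
    angle = rng.uniform(-180.0, 180.0)
    flip = rng.random() < 0.5

    def transform(a, order):
        a = ndimage.rotate(a, angle, axes=(0, 1), order=order, reshape=False)
        return a[:, ::-1] if flip else a

    # order=0 (nearest neighbour) keeps the binary targets binary.
    return transform(image, order=1), transform(targets, order=0)
```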

CNN building and training

Our CNN was generated using the segmentation_models library (https://github.com/qubvel/segmentation_models) and relies on TensorFlow and Keras. We used a LinkNet (Chaurasia and Culurciello, 2017) architecture with a VGG16 encoder (Simonyan and Zisserman, 2015 preprint). We found that this encoder, known to perform well at classification tasks, was also very efficient at segmenting epithelia. Of note, the detailed model architecture is available in the log window of the software upon loading the model. The network was trained for 300 epochs, using the complete training set at every epoch. We used Adam (Kingma and Ba, 2017 preprint) as the optimizer, with an initial learning rate of 10⁻³ for the first 150 epochs and a learning rate of 10⁻⁴ for the next 150 epochs. The network was trained with a batch size of 24 images and a tile size of 256×256 pixels. We chose the intersection over union (IoU), also called the Jaccard index, as the loss function, because it is particularly well suited to evaluating differences between binary images (Rahman and Wang, 2016).
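
Under the assumptions flagged in the comments, a sketch of this setup with the segmentation_models library could look as follows; it mirrors the parameters given above but is not the exact EPySeg training script.

```python
import segmentation_models as sm
import tensorflow as tf

sm.set_framework('tf.keras')  # use the Keras bundled with TensorFlow 2

# LinkNet decoder on a VGG16 encoder. Single-channel 256x256 tiles, sigmoid
# activation and training from scratch (no ImageNet weights) are assumptions.
model = sm.Linknet(backbone_name='vgg16',
                   input_shape=(256, 256, 1),
                   classes=7,                  # the seven training targets
                   activation='sigmoid',
                   encoder_weights=None)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=sm.losses.JaccardLoss())    # IoU (Jaccard index) loss

def lr_schedule(epoch, lr):
    # 10^-3 for the first 150 epochs, 10^-4 for the remaining 150.
    return 1e-3 if epoch < 150 else 1e-4

# model.fit(train_tiles, train_targets, batch_size=24, epochs=300,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```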

Segmentation quantification

To measure the accuracy of cell segmentation (i.e. the quality of the cell mask), we used the SEG score (Ulman et al., 2017). Briefly, this measure evaluates the average overlap between the reference segmentation and the corresponding neural network-generated segmentation. As a measure of segmentation quality (i.e. an evaluation of over- and under-segmentation), we used the average precision score (AP), defined as AP = TP/(TP + FP + FN), where FP corresponds to over-segmented cells and FN corresponds to under-segmented cells. TP, the properly segmented cells, are defined as segmented cells having an IoU score ≥0.7 when compared with the corresponding ground-truth cell.
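
For concreteness, here is a minimal Python sketch of the AP computation on two label images (one integer label per cell, 0 for background); the greedy best-IoU matching is an illustrative simplification.

```python
import numpy as np

def average_precision(gt: np.ndarray, pred: np.ndarray, iou_thr: float = 0.7) -> float:
    """AP = TP / (TP + FP + FN), with TP defined by best IoU >= iou_thr."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    matched, tp = set(), 0
    for i in gt_ids:
        g = gt == i
        best_iou, best_j = 0.0, None
        for j in pred_ids:
            if j in matched:
                continue
            p = pred == j
            iou = np.logical_and(g, p).sum() / np.logical_or(g, p).sum()
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thr:   # properly segmented cell
            matched.add(best_j)
            tp += 1
    fp = len(pred_ids) - tp       # over-segmented (spurious) cells
    fn = len(gt_ids) - tp         # under-segmented (missed) cells
    return tp / (tp + fp + fn)
```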

Software

The software was entirely coded in Python 3. The graphical user interface was made with PyQt5 (Riverbank Computing). The source code of our tool, along with installation instructions, can be found at https://github.com/baigouy/EPySeg.

Ethical approval

Animal experiments were carried out in agreement with national and European laws and approved by the Ethics Committee for Animal Experimentation of Marseille and the French Ministry for National Education, Higher Education and Research.

Acknowledgements

We would like to thank Robert Kelly and Miltos Tsiantis for sharing unpublished images with us.

Footnotes

  • Competing interests

    The authors declare no competing or financial interests.

  • Author contributions

    Conceptualization: B.A., B.P.; Software: B.A.; Resources: C.C., S.L., B.P.; Writing - original draft: B.A., B.P.; Writing - review & editing: B.A., C.C., S.L., B.P.; Supervision: B.P.; Funding acquisition: B.P.

  • Funding

    S.L. was supported by a Max Planck Core grant to Miltos Tsiantis. C.C. was supported by a grant from the Fondation Leducq to Robert Kelly (Transatlantic Network of Excellence 15CVD01). The project was supported by the Centre National de la Recherche Scientifique, the France-BioImaging/PICsL infrastructure (ANR-10-INSB-04-01) and the European Research Council under the European Union's Seventh Framework Programme [(FP/2007-2013)/ERC Grant Agreement 615789 to B.P.]. Deposited in PMC for immediate release.

  • Supplementary information

    Supplementary information available online at https://dev.biologists.org/lookup/doi/10.1242/dev.194589.supplemental

  • Received June 30, 2020.
  • Accepted November 17, 2020.
  • © 2020. Published by The Company of Biologists Ltd

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

Peer review history

The peer review history is available online at https://dev.biologists.org/lookup/doi/10.1242/dev.194589.reviewer-comments.pdf

References

  1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M. et al. (2016). TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint, arXiv:1603.04467 [cs.DC].
  2. Aigouy, B., Umetsu, D. and Eaton, S. (2016). Segmentation and quantitative analysis of epithelial tissues. In Drosophila: Methods and Protocols (ed. C. Dahmann), pp. 227-239. New York: Springer.
  3. Buchholz, T.-O., Prakash, M., Krull, A. and Jug, F. (2020). DenoiSeg: joint denoising and segmentation. arXiv preprint, arXiv:2005.02987 [cs.CV].
  4. Cavey, M., Rauzi, M., Lenne, P.-F. and Lecuit, T. (2008). A two-tiered mechanism for stabilization and immobilization of E-cadherin. Nature 453, 751-756. doi:10.1038/nature06953
  5. Chaurasia, A. and Culurciello, E. (2017). LinkNet: exploiting encoder representations for efficient semantic segmentation. In 2017 IEEE Visual Communications and Image Processing (VCIP), pp. 1-4. IEEE. doi:10.1109/VCIP.2017.8305148
  6. Cilla, R., Mechery, V., Hernandez De Madrid, B., Del Signore, S., Dotu, I. and Hatini, V. (2015). Segmentation and tracking of adherens junctions in 3D for the analysis of epithelial tissue morphogenesis. PLoS Comput. Biol. 11, e1004124. doi:10.1371/journal.pcbi.1004124
  7. Cortes, C., Francou, A., De Bono, C. and Kelly, R. G. (2018). Epithelial properties of the second heart field. Circ. Res. 122, 142-154. doi:10.1161/circresaha.117.310838
  8. Farrell, D. L., Weitz, O., Magnasco, M. O. and Zallen, J. A. (2017). SEGGA: a toolset for rapid automated analysis of epithelial cell polarity and dynamics. Development 144, 1725-1734. doi:10.1242/dev.146837
  9. Francou, A., De Bono, C. and Kelly, R. G. (2017). Epithelial tension in the second heart field promotes mouse heart tube elongation. Nat. Commun. 8, 14770. doi:10.1038/ncomms14770
  10. Gómez-de-Mariscal, E., García-López-de-Haro, C., Donati, L., Unser, M., Muñoz-Barrutia, A. and Sage, D. (2019). DeepImageJ: a user-friendly plugin to run deep learning models in ImageJ. bioRxiv. doi:10.1101/799270
  11. Heller, D., Hoppe, A., Restrepo, S., Gatti, L., Tournier, A. L., Tapon, N., Basler, K. and Mao, Y. (2016). EpiTools: an open-source image analysis toolkit for quantifying epithelial growth dynamics. Dev. Cell 36, 103-116. doi:10.1016/j.devcel.2015.12.012
  12. Kingma, D. P. and Ba, J. (2017). Adam: a method for stochastic optimization. arXiv preprint, arXiv:1412.6980 [cs.LG].
  13. Madhavan, M. M. and Madhavan, K. (1980). Morphogenesis of the epidermis of adult abdomen of Drosophila. J. Embryol. Exp. Morphol. 60, 1-31.
  14. Rahman, M. A. and Wang, Y. (2016). Optimizing intersection-over-union in deep neural networks for image segmentation. In Advances in Visual Computing, Vol. 10072 (ed. G. Bebis et al.), pp. 234-244. Springer International Publishing.
  15. Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S., Rueden, C., Saalfeld, S., Schmid, B. et al. (2012). Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676-682. doi:10.1038/nmeth.2019
  16. Schmidt, U., Weigert, M., Broaddus, C. and Myers, G. (2018). Cell detection with star-convex polygons. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 (ed. A. Frangi, J. Schnabel, C. Davatzikos, C. Alberola-López and G. Fichtinger), pp. 265-273. Cham: Springer International Publishing.
  17. Schneider, C. A., Rasband, W. S. and Eliceiri, K. W. (2012). NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671-675. doi:10.1038/nmeth.2089
  18. Simonyan, K. and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. arXiv preprint, arXiv:1409.1556v6 [cs.CV]. https://arxiv.org/abs/1409.1556v6
  19. Stringer, C., Wang, T., Michaelos, M. and Pachitariu, M. (2020). Cellpose: a generalist algorithm for cellular segmentation. Nat. Methods. doi:10.1038/s41592-020-01018-x
  20. Tepass, U. and Hartenstein, V. (1994). The development of cellular junctions in the Drosophila embryo. Dev. Biol. 161, 563-596. doi:10.1006/dbio.1994.1054
  21. Truong Quang, B.-A., Mani, M., Markova, O., Lecuit, T. and Lenne, P.-F. (2013). Principles of E-cadherin supramolecular organization in vivo. Curr. Biol. 23, 2197-2207. doi:10.1016/j.cub.2013.09.015
  22. Ulman, V., Maška, M., Magnusson, K. E. G., Ronneberger, O., Haubold, C., Harder, N., Matula, P., Matula, P., Svoboda, D., Radojevic, M. et al. (2017). An objective comparison of cell-tracking algorithms. Nat. Methods 14, 1141-1152. doi:10.1038/nmeth.4473
  23. Vincent, L. and Soille, P. (1991). Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 13, 583-598. doi:10.1109/34.87344
  24. von Chamier, L., Laine, R. F., Jukkala, J., Spahn, C., Krentzel, D., Nehme, E., Lerche, M., Hernández-Pérez, S., Mattila, P. K., Karinou, E. et al. (2020). ZeroCostDL4Mic: an open platform to use deep learning in microscopy. bioRxiv. doi:10.1101/2020.03.20.000133
  25. Weigert, M., Schmidt, U., Boothe, T., Müller, A., Dibrov, A., Jain, A., Wilhelm, B., Schmidt, D., Broaddus, C., Culley, S. et al. (2018). Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090-1097. doi:10.1038/s41592-018-0216-7
  26. Willis, L., Refahi, Y., Wightman, R., Landrein, B., Teles, J., Huang, K. C., Meyerowitz, E. M. and Jönsson, H. (2016). Cell size and growth regulation in the Arabidopsis thaliana apical stem cell niche. Proc. Natl Acad. Sci. USA 113, E8238-E8246. doi:10.1073/pnas.1616768113

Keywords

  • Computer vision
  • Deep learning
  • Epithelia
  • Quantitative biology
  • Segmentation
  • Software
