🚀 Contributing to SpeechBrain

The goal is to collectively write a set of open-source libraries for Conversational AI. It is crucial that these libraries remain homogeneous and compliant with the guidelines described in our documentation.

🌟 Zen of SpeechBrain

SpeechBrain can be used for research, academic, commercial, and non-commercial purposes. If you want to contribute, keep in mind the following principles:

Simplicity: the code must be easy to understand, even by students or users who are not professional programmers or speech researchers. Design your code so that it can be easily read. Given alternatives with the same level of performance, code the simplest one.

Modularity: Write your code to be modular and to fit well with the other functionalities of the toolkit. The idea is to develop a set of models that can be naturally interconnected with each other.

Efficiency: The code should be as efficient as possible. Contributors should maximize the use of PyTorch native operations.
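As a minimal sketch of this principle, the snippet below contrasts a Python loop with an equivalent PyTorch native (vectorized) operation. The function names are invented for illustration and are not part of SpeechBrain.

```python
import torch

def per_sample_energy_loop(signal):
    """Per-sample energy computed with a slow Python loop (discouraged)."""
    return [x.item() ** 2 for x in signal]

def per_sample_energy_vectorized(signal):
    """Same computation with a single native operation (preferred)."""
    return signal.pow(2)

signal = torch.tensor([1.0, 2.0, 3.0])

# Both versions compute the same values, but the vectorized one runs
# in optimized C/CUDA kernels instead of the Python interpreter.
assert torch.allclose(per_sample_energy_vectorized(signal),
                      torch.tensor(per_sample_energy_loop(signal)))
```

The same idea applies to batching: operating on whole tensors at once is almost always faster than iterating over batch elements in Python.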

Documentation: Given the goals of SpeechBrain, writing rich and good documentation is a crucial step. Write docstrings with runnable examples (as done in PyTorch code).
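A minimal sketch of this docstring style is shown below. The function `compute_frame_count` is a hypothetical example, not an actual SpeechBrain function; the point is the documented arguments and the doctest-runnable Example section.

```python
def compute_frame_count(num_samples, hop_length=160):
    """Compute the number of analysis frames for a given signal length.

    Arguments
    ---------
    num_samples : int
        Number of samples in the waveform.
    hop_length : int
        Hop size (in samples) between successive frames.

    Returns
    -------
    int
        The number of frames.

    Example
    -------
    >>> compute_frame_count(16000, hop_length=160)
    100
    """
    return num_samples // hop_length
```

Examples written this way can be checked automatically with Python's `doctest` module, so the documentation stays correct as the code evolves.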

🔧 How to get my code into SpeechBrain?

SpeechBrain is hosted on GitHub. Contributing requires three steps:

1. Fork and clone the repository, then install our test suite as detailed in the documentation.
2. Write your code and test it properly. Commit your changes to your fork, using our pre-commit hooks to ensure the tests pass. Then open a pull request on the official repository.
3. Participate in the review process. Each pull request is reviewed by one or two reviewers. Please integrate their feedback into your code. Once the reviewers are happy with your pull request, they will merge it into the official codebase.

Details about this process (including the steps for installing the tests) are given in the documentation.
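The three steps above might look roughly as follows on the command line. This is a sketch, not the official procedure: replace `<your-username>` with your GitHub account, and note that the branch name and install commands are illustrative assumptions.

```shell
# Step 1: fork on GitHub, then clone your fork and install the project
git clone https://github.com/<your-username>/speechbrain.git
cd speechbrain
pip install -r requirements.txt
pip install --editable .
pre-commit install            # enable the pre-commit hooks locally

# Step 2: develop on a branch; hooks run on every commit
git checkout -b my-feature
# ... edit code, add tests ...
git commit -am "Add my feature"
git push origin my-feature
# then open a pull request against speechbrain/speechbrain on GitHub
```

Step 3 (review) happens entirely on GitHub: push follow-up commits to the same branch and they are added to the open pull request automatically.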

🙌 How can I help?

Examples of contributions include new recipes, new models, new external functionalities, and fixes for issues/bugs.

🌟 Contributors

We would like to thank the following contributors: