- Proposes CRISP-ML(Q), a process model for developing machine learning applications with quality assurance methodology
- Extends CRISP-DM by adding:
  - Quality assurance methods to mitigate risks in each phase
  - A post-deployment monitoring and maintenance phase
  - A merged business and data understanding phase
- Provides best practices and guidelines for each phase of the ML process
- Aims to increase success rate and efficiency of ML projects in industrial settings
- Covers the entire ML development lifecycle, from defining objectives to maintenance
This Docker setup provides a robust environment for network sniffing with Scapy inside JupyterLab. Here's a quick overview of the key components:

The Dockerfile builds upon the jupyter/datascience-notebook image, adding:
- libpcap-dev for packet capture capabilities
- Custom Python requirements (scapy)
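A minimal Dockerfile along these lines might look as follows; the exact base tag and the requirements.txt layout are assumptions, not the setup's verbatim file:

```dockerfile
FROM jupyter/datascience-notebook:latest

# System packages need root; the Jupyter images run as an unprivileged user by default
USER root

# libpcap-dev supplies the packet-capture library that Scapy relies on
RUN apt-get update && \
    apt-get install -y --no-install-recommends libpcap-dev && \
    rm -rf /var/lib/apt/lists/*

# Drop back to the unprivileged notebook user
USER ${NB_UID}

# Install the custom Python requirements (scapy)
COPY --chown=${NB_UID}:${NB_GID} requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
```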
This guide outlines the steps to build and run a JupyterLab container with specific directories and permissions, following a structure commonly used in data science projects.
- Docker and Docker Compose installed on an Ubuntu system.
- Your project should include a Dockerfile and docker-compose.yml file, as outlined earlier in the discussion.
- Create the source directories: set up the src directory along with its subdirectories, which will be used as volumes in your Docker containers (see the sketch below).
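Since the earlier discussion isn't reproduced here, a generic sketch of what such a docker-compose.yml often looks like (service name, port, and volume paths are assumptions):

```yaml
services:
  jupyterlab:
    build: .                      # Dockerfile in the project root
    ports:
      - "8888:8888"               # JupyterLab's default port
    volumes:
      - ./src:/home/jovyan/src    # mount the source tree created below
```

And a sketch of the directory and permission setup, with hypothetical subdirectory names:

```bash
# Create the src directory and its (hypothetical) subdirectories used as volumes
mkdir -p src/data src/notebooks src/scripts

# Jupyter Docker images run as user jovyan (UID 1000, GID 100);
# make the mounted tree writable for that user
sudo chown -R 1000:100 src

# Build the image and start the container in the background
docker compose up --build -d
```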
This is my tutorial on how to create and publish a webapp written in Python, built with Streamlit as the interface and brought online with Heroku.

Algorithms and applications are created for a precise purpose: to be used outside of your own PC. Keeping them "local" risks being a serious limitation. Hence my need to find a way to bring small applications online, so that more people can use them without having to go through me or my PC.

To do this you need three ingredients: GitHub, Streamlit, and Heroku.
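As a concrete starting point, here is a minimal Streamlit app of the kind this workflow deploys; the file name app.py and its contents are purely illustrative:

```python
# app.py - a minimal, illustrative Streamlit app (not the tutorial's actual code)
import streamlit as st

st.title("My first webapp")

# Streamlit reruns the script on every interaction, so plain Python reads top to bottom
name = st.text_input("What's your name?", value="world")
st.write(f"Hello, {name}!")
```

Locally this runs with streamlit run app.py; publishing on Heroku additionally requires a requirements.txt listing streamlit and a Procfile telling Heroku to run that same command (typically along the lines of web: streamlit run app.py --server.port $PORT).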
With modern technological tools it is becoming increasingly easy to build a web identity of your own that is not tied only to the world of social networks. In this post we will see how to build a blog using Pelican, a static-site generator based on Python, one of the most popular languages of the moment.

Building it will be the first step; making it operational will be the second, and for this we make use of the possibilities offered by GitHub, the best-known code repository, which lets each user host their own web page.

The purpose? Anything from creating an alternative Curriculum Vitae, to telling about and sharing examples of work, projects, or simple tutorials like this one. Let's get started with the first step!
A virtual environment is useful when you want to try out particular libraries and extensions of a programming language (e.g. Python) without having to worry about creating conflicts with previous (or later) versions already installed on your machine.

For example: the XY library only works with version 3.0 of the Z tool, but you have version 4.0 installed on your PC. Rather than uninstalling the current version and risking conflicts, creating a virtual environment is the ideal solution!

These steps can be performed comfortably from the command prompt. Let's get started!
- First of all, you need to install the package that allows the creation of virtual environments. This is possible by running the command shown below.
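The original command is not reproduced here; a common choice, assuming the classic virtualenv package, is:

```bash
# Install the virtualenv package (assumed; Python 3.3+ also bundles the built-in venv module)
pip install virtualenv

# Create an environment in a folder named "venv" (the name is arbitrary) and activate it
virtualenv venv
source venv/bin/activate
```

With the environment active, anything installed via pip (Pelican included, in this tutorial's case) stays isolated from the system-wide Python.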