Python
This guide includes examples of how to set up the following Python applications to deploy on Kinsta’s Application Hosting services from a GitHub repository.
Prerequisites
- Kinsta’s quick start templates are stored and managed in GitHub; therefore, you need a GitHub account to access them.
- You need to create a MyKinsta account to deploy the application.
Python
- Log in to GitHub and create a new repository from this template (Use this template > Create a new repository): Kinsta – Python Starter.
- In MyKinsta, click Applications > Add application > select GitHub, click Connect git provider > Authorize, and log in to your GitHub account.
- Choose the Python Starter repository and a Data center location. Leave all other settings as default and click Continue on each step.
- On the Summary step, click Deploy now.
During deployment, Kinsta automatically detects the Start command for the web process from the Procfile in the repository and installs dependencies defined in your requirements.txt file. The app is available as soon as the deployment finishes, and the Kinsta Welcome page loads at your application’s URL.
Web Server Setup
When you deploy an application and include a Procfile in the repository, Kinsta automatically creates a web process based on the Procfile in the root of the repository. Use this command in your Procfile to run your web server:
web: python server.py
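As a sketch of what `python server.py` could start, here is a minimal standard-library web server. This is illustrative only — the starter template ships its own `server.py` — and it assumes Kinsta supplies the port to bind through the `PORT` environment variable:

```python
# server.py — a minimal sketch of a web server that `web: python server.py`
# could start; the starter template's actual server may differ.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Kinsta Application Hosting!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def make_server():
    # The PORT environment variable is provided by the hosting environment;
    # 8080 is only a local fallback.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), Handler)

# In server.py proper, the module would end by starting the loop:
#     make_server().serve_forever()
```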
Django
This is an example of how to set up a Django application to deploy on Kinsta’s Application Hosting services from a GitHub repository.
- Log in to GitHub and create a new repository from this template (Use this template > Create a new repository): Kinsta – Hello World – Django.
- In MyKinsta, click Applications > Add application > select GitHub, click Connect git provider > Authorize, and log in to your GitHub account.
- Choose the Hello World – Django repository and a Data center location. In Environment variables, in Key 1, enter `SECRET_KEY`, and in Value 1, add a random string. Select Available during runtime and Available during build process.
- Leave all other settings as default and click Continue on each step. On the Summary step, click Deploy now.
The `python manage.py collectstatic` command executes at every build to collect all static files into the directory defined in `STATIC_ROOT`. During deployment, Kinsta automatically detects the required command from the Procfile in the repository and installs dependencies defined in your requirements.txt file. The app is available as soon as the deployment finishes, and the default Django page confirming successful installation loads at your application’s URL.
Environment Variables
The `SECRET_KEY` should not be stored in your repository; instead, set it in an environment variable with a random string.
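A settings excerpt along these lines shows how the pieces fit together — the key is read from the environment that MyKinsta injects, and `STATIC_ROOT` names the directory `collectstatic` fills at build time. This is an illustrative sketch, not the template's actual settings.py:

```python
# settings.py (excerpt) — illustrative sketch; paths and names are assumptions.
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

# MyKinsta supplies the real value at build time and runtime;
# never commit a real key to the repository.
SECRET_KEY = os.environ.get("SECRET_KEY", "insecure-dev-placeholder")

# `python manage.py collectstatic` copies all static files here at every build.
STATIC_URL = "static/"
STATIC_ROOT = BASE_DIR / "staticfiles"
```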
Web Server Setup
Start Command
When you deploy an application and include a Procfile in the repository, Kinsta automatically creates a web process based on the Procfile in the root of the repository. Use this command in your Procfile to run your web server:
web: gunicorn helloworld.wsgi
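The `helloworld.wsgi` argument tells gunicorn to import the `helloworld/wsgi.py` module that Django generates and serve the `application` callable it defines. As a sketch of the WSGI convention itself (independent of Django — the template's real module is produced by `django-admin startproject`):

```python
# A bare-bones WSGI callable, showing the interface gunicorn expects.
# Django's generated helloworld/wsgi.py builds this object for you via
# django.core.wsgi.get_wsgi_application().
def application(environ, start_response):
    body = b"Hello, World!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```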
Flask
This is an example of how to set up a Flask application to deploy on Kinsta’s Application Hosting services from a GitHub repository.
- Log in to GitHub and create a new repository from this template (Use this template > Create a new repository): Kinsta – Hello World – Flask.
- In MyKinsta, click Applications > Add application > select GitHub, click Connect git provider > Authorize, and log in to your GitHub account.
- Choose the Hello World – Flask repository and a Data center location. Leave all other settings as default and click Continue on each step.
- On the Summary step, click Deploy now.
During deployment, Kinsta automatically detects the Start command for the web process from the Procfile in the repository and installs dependencies defined in your requirements.txt file. The app is available as soon as the deployment finishes, and the Kinsta Welcome page loads at your application’s URL.
Web Server Setup
Start Command
When you deploy an application and include a Procfile in the repository, Kinsta automatically creates a web process based on the Procfile in the root of the repository. Use this command in your Procfile to run your web server:
web: gunicorn helloworld.wsgi
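For gunicorn to serve the app, the module it loads must expose a WSGI callable. A minimal Flask app looks like the following — file and variable names here are illustrative, and the template's actual layout may differ:

```python
# app.py — a minimal Flask application (illustrative sketch).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

# A helloworld/wsgi.py module could then expose it for gunicorn with:
#     from app import app as application
```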
Langchain With a Dockerfile
This is an example of how to set up a LangChain application with a Dockerfile to deploy on Kinsta’s Application Hosting services from a GitHub repository.
The LangChain framework is intended for developing language-model-powered applications that are data-aware, agentic (allowing a language model to interact with its environment), and differentiated. More information is available on the LangChain website.
- Log in to GitHub and create a new repository from this template (Use this template > Create a new repository): Kinsta – Hello World – LangChain.
- Log in to OpenAI (create an account if you do not already have one). Go to OpenAI API and generate and copy your API key.
- In MyKinsta, click Applications > Add application > select GitHub, click Connect git provider > Authorize, and log in to your GitHub account.
- Choose the Hello World – LangChain repository and a Data center location. In Environment variables, in Key 1, enter `OPENAI_API_KEY`, and in Value 1, paste the API key you copied from OpenAI, then click Continue.
- In the Build environment step, select Use Dockerfile to set up container image. The Dockerfile path and Context can be left blank.
- Click Continue > Continue, and on the Summary step, click Deploy now.
During deployment, Kinsta automatically installs dependencies defined in your requirements.txt file. The app is available as soon as the deployment finishes, and the Kinsta Welcome page loads at your application’s URL.
Web Server Setup
Build Environment
When creating your LangChain application, you must choose Use Dockerfile to set up container image in the Build environment step.
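The template repository contains its own Dockerfile, which Kinsta uses to build the container image. As a rough sketch of the general shape such a file takes (the base image, file names, and start command below are assumptions, not the template's actual contents):

```dockerfile
# Illustrative only — the template ships its own Dockerfile.
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The start command depends on the app's entry point (hypothetical here).
CMD ["python", "main.py"]
```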
Environment Variables
In Environment variables, in Key 1, enter `OPENAI_API_KEY`, and in Value 1, paste the API key you copied from OpenAI. If you use different models (not OpenAI’s), adjust the key and value as needed.
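Inside the container, the application reads the key from the environment rather than from the repository. A small sketch of that pattern (the helper name is hypothetical; the exact LangChain initialization depends on the library version you use):

```python
import os

def get_openai_key():
    # MyKinsta injects OPENAI_API_KEY into the container environment;
    # get_openai_key is a hypothetical helper, not part of LangChain.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; add it in MyKinsta > Environment variables"
        )
    return key
```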