Before we start, a quick bit of background: this is a sub-post of the series “Building Full-stack web app with PostGraphile and React”. Before you begin, make sure you have PostgreSQL running on your machine.
Knex.js is an SQL query builder for Postgres, MSSQL, MySQL, MariaDB, SQLite3, Oracle, and Amazon Redshift. It’s flexible, portable, and fun to use. We are going to use its migrations feature to maintain our database schema.
Knex-migrate keeps track of our schema changes and dummy data, which makes it a very useful tool when migrating a database. Knex-migrate holds all the migration files and logs; we just need to run a command, and it takes care of building the schema and inserting static data into the database.
Let’s get started building our schema with knex-migrate. Make sure you have knex and knex-migrate installed in your project. If not, run the following command:
“One major advantage of this is that you can clone your project into a fresh environment and run the knex migration scripts. They will take care of everything for you.”
npm install knex knex-migrate
Navigate to the server folder and run the following command:
npx knex init
This command will generate knexfile.js under the server folder. This file, auto-generated by knex, contains all the configuration needed to connect to the database for each environment, such as development, staging, and production. I have updated the development connection in knexfile.js to connect to our Postgres database. Your file should look as follows:
Here is the .env file for your reference.
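The exact variables depend on your setup; a typical .env for a local Postgres database might look like this (all values here are placeholders):

```
DB_HOST=127.0.0.1
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=blog
```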
We can update this file based on our database configuration and environment. As per this configuration, we need to create two new folders, migrations and seeds, under the server/db folder.
migrations will contain the files that define our schema structure. seeds will contain the files with the dummy or static data we want to insert into the database.
Now, to use this configuration with knex, we need to create knex.js under src/db. This file will export a connection module based on the environment.
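A sketch of what this module might look like; the relative path to knexfile.js is an assumption and depends on where knex.js sits in your project:

```javascript
// src/db/knex.js — exports a knex instance configured for the current
// environment (defaults to development). This assumes the knex package
// is installed and knexfile.js sits two levels up; adjust the path to
// match your project layout.
const knex = require('knex');
const config = require('../../knexfile');

const environment = process.env.NODE_ENV || 'development';

module.exports = knex(config[environment]);
```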
Now, our folder and file structure should look like this:
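A rough sketch of the layout described above (the exact nesting may differ in your project):

```
server/
├── db/
│   ├── migrations/
│   └── seeds/
├── src/
│   └── db/
│       └── knex.js
├── knexfile.js
└── .env
```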
In this step, we’ll create tables in our database using knex-migrate and insert our dummy data into them. For more details about the knex schema builder, check out: http://knexjs.org/#Schema
To create a schema migration, we need to run a command that creates an autogenerated file under the migrations folder, with a date-time stamp included in the file name.
For example: 20200122192011_migration_create_table.js
Here’s the command to create a migration file:
npx knex migrate:make migration_create_table
I have updated the 20200122192011_migration_create_table.js file with the schema details, as per the knex documentation.
To apply this schema to our Postgres database, we’ll need to run the following command:
npx knex migrate:latest
Now, if we go to our Postgres database, we’ll see that knex-migrate has created the tables according to the schema we configured.
As you can see, the tables were created via knex-migrate. (This screenshot is from pgAdmin 4.)
Now that we have our tables ready, we can insert some data using seeds. We will create two seed files: one for users and one for posts. Here are the commands and sample files I have created for the seeds:
npx knex seed:make 01_users
npx knex seed:make 02_posts
Unlike migrations, these commands create autogenerated files under seeds without a timestamp in the file name.
For example: 01_users.js and 02_posts.js
One important thing to remember when creating seed files is to start the filename with an incremental number, so they run in order. Read more details on this here: http://knexjs.org/#Seeds-CLI.
Here are the sample seed files I have created to insert users and one blog associated with that user:
Here, I am using the .del() function to delete all existing entries. You can remove that and update the file based on your requirements.
Now, let’s run the following command to execute the seed files:
npx knex seed:run
We hope this has given you an idea of how you can use knex-migrate to build your schema and maintain a log of all schema-level operations.
P.S: We’re hiring at Heady! 👋 If you would like to come work with us on some really cool apps please check out our careers page.