Using Electron’s ipcRenderer and ipcMain to Create a Desktop Application

Taking a page from Event-Oriented Programming.

Richie Pineda
6 min read · Aug 17, 2019
Photo by Helloquence on Unsplash

It’s an adage as old as time itself: “If it’s something that’ll put you out of your comfort zone, it’s probably something worth doing.” The first half of that sentence is a bit of a white lie, since I just made the phrase up, but the latter half rings truer than ever in my mind: my latest co-endeavor, QLStico, tested everything I had learned as a developer over the last several months. This post could easily be a recounting of every single lesson I learned during this project (GraphQL, React Hooks, React Context, SSL, etc.), but instead I’ll share the most salient one, in the hope of encouraging any web developer curious about the jump into the desktop world. Namely: developing an offline application that services requests via Event Emitters instead of a traditional web server.

As the title suggests, QLStico was built using Electron.js, which supports JavaScript, HTML, and CSS and is itself a Chromium window running the Node.js runtime environment as its backend. Development with Electron was a fantastic experience that allowed us to use the languages and tools we knew as web developers to build something fundamentally new to us.

So you want to build an offline desktop app, huh? There’s Event Emitters for that.

QLStico, essentially a GraphQL-infused tribute to Postico, was designed to be a lightweight PostgreSQL database visualizer and CRUD operator. Despite its simplicity, the trouble came when we as web developers had to implement a more traditional Model-View-Presenter approach to development. We found that our app would be dealing almost exclusively with the internal PostgreSQL files of a computer, so a traditional server-client setup with routes and URLs did not fit the purpose. Maybe you’re in the same boat, deciding whether you want to build some sort of standalone, offline application or tool. If that is the case, as it was for us, the question of “how do you service all the navigation and retrieval of information without a server?” comes up, and Electron’s answer is ipcRenderer and ipcMain. To quote straight from Electron’s docs:

The ipcRenderer module is an instance of the EventEmitter class. It provides a few methods so you can send synchronous and asynchronous messages from the render process (web page) to the main process. You can also receive replies from the main process.

The ipcMain module is an instance of the EventEmitter class. When used in the main process, it handles asynchronous and synchronous messages sent from a renderer process (web page). Messages sent from a renderer will be emitted to this module.

In other words, they are fancy async Event Emitters that give you a line of communication to and from Electron’s main process and the browser window. In our case we used them as a sort of pseudo-middleware layer: an action on the front end triggers an event that fires off a function somewhere else in our Node code, and the front-end caller listens for the response. This allowed us to keep our frontend and backend modularized and decoupled, such that the frontend never talked directly to any backend code; our middleman, ipcRenderer/ipcMain, handled that for us. To better illustrate this, consider the simplified file structure of QLStico:

app
┣ pg
┃ ┗ pg.js
┣ main
┃ ┣ main.js
┣ components
┃ ┣ AllDBs.js

The “components” folder represents the browser-side code, the “pg” folder represents the backend PostgreSQL client code, and the “main” folder contains the main Electron process code.
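Before walking through the QLStico code, here is the bare round-trip pattern in isolation. This is just a minimal sketch for illustration, not code from QLStico; the “PING”/“PING_REPLY” channel names are made up, and it assumes nodeIntegration is enabled so the renderer can require Electron:

// In the main process (illustrative example, not QLStico code):
const { ipcMain } = require('electron');

ipcMain.on('PING', (event, payload) => {
  // Do some work with the payload, then reply on a second channel
  event.reply('PING_REPLY', `pong: ${payload}`);
});

// In the renderer process (the browser window):
const { ipcRenderer } = require('electron');

ipcRenderer.send('PING', 'hello');
ipcRenderer.once('PING_REPLY', (_, response) => {
  console.log(response); // logs "pong: hello"
});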

In this example, let’s walk through how we used the ipcRenderer/ipcMain to retrieve the names of tables inside of a database. The frontend code inside of the “AllDBs.js” component is as follows:

// Inside AllDBs.js file:
import React, { useContext } from 'react';
const { ipcRenderer } = require('electron');

const AllDbs = props => {
  (...)
  function getTablesFromDb(dbName) {
    // Emit the request to the main process...
    ipcRenderer.send("GET_TABLE_NAMES", dbName);
    // ...and listen once for its reply
    ipcRenderer.once("GET_TABLE_NAMES_REPLY", (_, tableNamesResponse) => {
      setTableNamesInContext(tableNamesResponse);
      props.history.push('/allTables');
    });
  }

  return (...);
};

The flow of the function is simple: we dispatch ipcRenderer.send with the channel name (“GET_TABLE_NAMES”) we would like to “broadcast” our request on, along with the name of the database we want to access as the second argument. The magic happens in ipcRenderer.once, which takes the channel name the program should listen on (“GET_TABLE_NAMES_REPLY”) and a callback describing what to do with the response from that channel. We use .once here because we do not want to keep this listener “subscribed” to the channel: once it gets our payload it stops listening and frees up that listener until the next time the function is invoked (listeners are valuable resources, and too many active listeners can start impacting performance, so save the full-time listeners for your ipcMain code). The callback passed as the second argument of ipcRenderer.once receives the event object first (named “_” here since we don’t need it) and tableNamesResponse second, which contains all the names of the tables in the database. It then executes our business logic: it passes the table names into our React Context Provider and uses the ‘react-router-dom’ HashRouter to push us into the next view with props.history.push(). See, that was easy!
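If you do need a long-lived listener on the renderer side, ipcRenderer.on paired with an explicit ipcRenderer.removeListener works too. The sketch below is not QLStico code (the channel name is made up); it just shows the difference:

const { ipcRenderer } = require('electron');

// .once: fires a single time, then the listener is removed automatically
ipcRenderer.once('SOME_REPLY', (_, payload) => console.log(payload));

// .on: fires on every message to the channel, so clean up after yourself
const handler = (_, payload) => console.log(payload);
ipcRenderer.on('SOME_REPLY', handler);
// ...later, e.g. when a React component unmounts:
ipcRenderer.removeListener('SOME_REPLY', handler);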

Turning our attention to the backend, i.e. everything that happens in between “broadcasting” our request and “listening” for the response on the front end, the code is as follows:

// Inside pg.js file:
const pg = require('pg');
(...)
const getAllTables = async dbName => {
  DB_CONNECTION.setDatabase(dbName);
  const pool = new pg.Pool(DB_CONNECTION);

  try {
    const response = await pool.query(
      `SELECT table_name FROM information_schema.tables
       WHERE table_type = 'BASE TABLE' AND table_schema NOT IN
       ('pg_catalog', 'information_schema',
        'management', 'postgraphile_watch') AND
       table_name != '_Migration'`
    );
    return response.rows.map(({ table_name: tableName }) => tableName);
  } catch (error) {
    return error.message;
  }
};

module.exports = { getAllTables };

It is worth mentioning for the sake of this example what is actually going on inside of pg.js: the getAllTables function is simply using the pg package to create a client connection with PostgreSQL and return the requested information. In this case it is an array containing the names of the tables in a database. The file also contains configuration settings for the PostgreSQL connection.
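The DB_CONNECTION object and its setDatabase helper are elided here, but as a rough idea, the config passed to pg.Pool usually looks something like the sketch below. To be clear, this is a hypothetical shape for illustration, not QLStico’s actual configuration:

// Hypothetical shape for DB_CONNECTION (not the actual QLStico config)
const DB_CONNECTION = {
  host: 'localhost',
  port: 5432,
  user: 'postgres',
  password: '',
  database: 'postgres',
  // Illustrative helper so callers can switch databases before creating a pool
  setDatabase(dbName) {
    this.database = dbName;
  }
};

module.exports = { DB_CONNECTION };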

// Inside main.js file:
const { ipcMain } = require('electron');
const { getAllTables } = require('../pg/pg.js');
(...)
// Always listening for table-name requests from the renderer
ipcMain.on("GET_TABLE_NAMES", async (event, dbName) => {
  const tableNames = await getAllTables(dbName);
  event.reply("GET_TABLE_NAMES_REPLY", tableNames);
});

Notice that inside main.js we’re now using ipcMain instead of ipcRenderer, because we are writing code that executes on the Node side of Electron, i.e. the main process, and we are no longer inside the browser. An important distinction, but both essentially serve the same purpose! (Another important note: your ipcMain handlers should live inside your main Electron file, where everything Electron-related is being spun up, as sketched below.)
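For context, here is a minimal sketch of what that main Electron file might look like, with the ipcMain handler registered alongside the window setup. The window options and loaded URL are illustrative, not QLStico’s actual configuration:

// main.js: a minimal, illustrative Electron entry point
const { app, BrowserWindow, ipcMain } = require('electron');
const { getAllTables } = require('../pg/pg.js');

function createWindow() {
  const win = new BrowserWindow({
    width: 1024,
    height: 768,
    // Lets the renderer use require('electron') in this pattern
    webPreferences: { nodeIntegration: true }
  });
  win.loadURL('http://localhost:3000'); // or win.loadFile('index.html')
}

// Register IPC handlers in the same file that spins everything up
ipcMain.on('GET_TABLE_NAMES', async (event, dbName) => {
  event.reply('GET_TABLE_NAMES_REPLY', await getAllTables(dbName));
});

app.on('ready', createWindow);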

Notice the first argument passed into ipcMain.on; look familiar? That’s the same channel name we used to emit our request from the front end, and we’re using .on here because we always want to be listening for a “GET_TABLE_NAMES” request. Needless to say, this is where our request is recognized, and now we can give instructions on what to do when a message arrives on that channel. In this case we call getAllTables() so it can do all the heavy PostgreSQL lifting, and we assign its result to tableNames. Notice again that the second argument is a callback, with the first argument of that callback being event. This is how we “direct” our response back to where the original event was emitted, which is exactly what event.reply does. We “reply” with another emitted event, this time on “GET_TABLE_NAMES_REPLY”, with a payload of tableNames, which again looks familiar: it’s what the listener is subscribed to on the front end’s ipcRenderer.once!
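(A quick aside, not from the QLStico codebase: event.reply is a newer convenience on the event object; the same round trip has traditionally been written with event.sender.send, which emits back to the webContents that sent the original message.)

// Equivalent reply written with event.sender.send instead of event.reply
ipcMain.on('GET_TABLE_NAMES', async (event, dbName) => {
  const tableNames = await getAllTables(dbName);
  event.sender.send('GET_TABLE_NAMES_REPLY', tableNames);
});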

Hopefully the process flow is a bit clearer now: our front end uses ipcRenderer.send to emit an event; ipcMain.on in the main process is always listening for that event and executes some backend functionality; it then emits a reply via event.reply, which the front end is listening for via ipcRenderer.once so it can in turn do its own thing with the response.

While this was certainly a specific and simplified example, hopefully it was enough to illustrate and inspire. Happy coding!
