Passing UTM Name/Value Pairs to Specific Links on a Website

Adding UTM parameters to links is useful for tracking marketing efforts. For example, if you have a banner and an email that both link to a landing page, you’ll want to know which method (banner or email) generated the most page visits and form fills. Google has a campaign URL builder that will generate URLs with UTMs for you, and in Google Analytics, you can find pageviews to the landing page by UTM parameter. However, if you want to track any subsequent pages after the landing page, you’ll need a way to pass the UTMs along to those pages. In my particular situation, I needed to pass UTMs to a 3rd-party site. The visitor flow looked like this:

  1. Click a banner on the home page of example.com. The banner link has UTMs in its query string, e.g. example.com/landing-page?utm_source=home-page-banner
  2. Land on an overview page on example.com, e.g. example.com/landing-page
  3. Maybe visit other pages on example.com
  4. Return to example.com/landing-page
  5. Click a link to register for something on a 3rd-party site, e.g. foo.com/register

By default, only the first pageview of example.com/landing-page would include UTMs in the URL. To pass the UTMs to the link to the 3rd-party site, something extra needed to be done. I chose the following approach, which works well.

  1. Write JavaScript code that runs on all pages.
  2. If a URL contains UTM params, save the UTM name/value pairs as session cookies, overwriting any existing UTM cookies.
  3. If a page has any <a> tags with the class “appendUTM”, then rewrite the href value by appending the UTM params.

I then added the class “appendUTM” to any links where I wanted to append the UTMs. In my case, it was the links to the 3rd-party registration site.

Below is a sketch of the code that accomplishes this.
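The list of UTM keys and the cookie helper are illustrative rather than the exact original script, but all three steps above are here.

// Runs on every page (step 1).
(function () {
  var UTM_KEYS = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];

  // Step 2: if the URL contains UTM params, save each name/value pair as a
  // session cookie (no expiry date), overwriting any existing UTM cookies.
  var params = new URLSearchParams(window.location.search);
  UTM_KEYS.forEach(function (key) {
    if (params.has(key)) {
      document.cookie = key + '=' + encodeURIComponent(params.get(key)) + '; path=/';
    }
  });

  // Read a cookie value by name (returns null if not set).
  function getCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : null;
  }

  // Step 3: for every <a> tag with the class "appendUTM", append the saved
  // UTM params to its href.
  document.querySelectorAll('a.appendUTM').forEach(function (link) {
    var url = new URL(link.href, window.location.origin);
    UTM_KEYS.forEach(function (key) {
      var value = getCookie(key);
      if (value !== null) {
        url.searchParams.set(key, value);
      }
    });
    link.href = url.toString();
  });
})();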

Separating Website Images and Bulk Uploading Them to AWS S3 Using AWS CLI

I recently had to move 35,000+ website images from Git to AWS S3. The images were spread across many subfolders. First, I had to separate the images from all other files. Then, when I tried dragging and dropping the parent folder containing all the images onto the AWS S3 web interface, I was looking at a 9-to-17-hour wait.

When I woke up in the morning, I found the upload had completed with errors.

Here’s how I easily separated the images from all other files and successfully uploaded all 35,000+ images.

Separate images from other files

First, I wanted to see a list of all unique file extensions so I could know what image file extensions were being used.

find . -type f | sed 's|.*\.||' | sort -u

This returned a list like the one below.

JPG
PNG
ali
bmp
brs
cnd
CSS
...

Then, I copied the website root folder to a new sibling folder called “website-images”, where I’d keep just the images.

Then, I deleted all images from the “website” folder using the following command.

find . -type f \( -name '*.gif' -or -name '*.jpg' -or -name '*.JPG' -or -name '*.png' -or -name '*.PNG' -or -name '*.bmp' \) -delete

Then, I deleted all non-image files from the “website-images” folder using the following command.

find . -type f -not \( -name '*.gif' -or -name '*.jpg' -or -name '*.JPG' -or -name '*.png' -or -name '*.PNG' -or -name '*.bmp' \) -delete

The last command resulted in a bunch of empty folders. I deleted the empty folders using this command.

find . -empty -type d -delete

I then wrote and ran a Node.js script to walk through each HTML file and convert each relative image path to a path relative to the website root folder, e.g.

../assets/images/images/home_page_banner.png

to

/public/en/csam/latest/assets/images/images/home_page_banner.png
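A simplified sketch of such a script is below. It assumes every image reference points into assets/images and hard-codes the root prefix from the example above; a real version would resolve each path against the HTML file’s location.

import fs from "fs";
import path from "path";

// Root prefix from the example above.
const ROOT_PREFIX = "/public/en/csam/latest";

// Recursively collect all HTML files under a folder.
function findHtmlFiles(dir) {
  let results = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      results = results.concat(findHtmlFiles(fullPath));
    } else if (entry.name.endsWith(".html")) {
      results.push(fullPath);
    }
  }
  return results;
}

// Rewrite relative image paths like ../assets/images/... to root-relative paths.
for (const file of findHtmlFiles("./website")) {
  const html = fs.readFileSync(file, "utf8");
  const rewritten = html.replace(
    /src="(?:\.\.\/)+(assets\/images\/[^"]+)"/g,
    (match, imagePath) => `src="${ROOT_PREFIX}/${imagePath}"`
  );
  if (rewritten !== html) {
    fs.writeFileSync(file, rewritten);
  }
}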

Upload all images to S3

As mentioned earlier, uploading 35,000 images to S3 using the web interface took a long time and completed with errors. What ended up working was uploading the images using the AWS CLI. Here’s how I did it.

Install AWS CLI

Here’s the documentation. I ran the macOS command-line installer.

curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

Create an access key

I had to create an access key to authenticate. I created a new Identity and Access Management (IAM) user and then clicked the “Create access key” button to generate a new key.

I ended up with 2 keys similar to the ones below:

  1. Access key: AKIAIOSFODNN7EXAMPLE
  2. Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Set access key as environment variables

I then saved those key values as environment variables. Here are the instructions. I basically ran the following commands in the terminal, replacing the values with my actual values.

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2

For the default region, I chose the region for my S3 bucket.

Upload (sync) files

I then uploaded (synced) files from my local to my remote S3 bucket. Here’s the documentation for the S3 sync command. Since I had already uploaded some files, I was hoping to find a flag to skip uploading files that exist at the destination. It turns out that the “sync” command does this by default. I ran the following command in dry-run mode to verify the output was correct.

aws s3 sync . s3://q-website-images/docs/ --dryrun

Then, I reran the command without the dry-run flag.

aws s3 sync . s3://q-website-images/docs/ 

The command output a list of the files it uploaded.

When it was done, I tried rerunning the command only to find that it completed with no output, indicating that all source files already existed in the destination. That was a sign that the sync was complete. Looking at the number of files in the S3 web console, I could see the correct number of files listed there.
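If you’d rather verify the count from the terminal than the web console, aws s3 ls with the --summarize flag prints totals (the bucket path here matches the sync command above):

aws s3 ls s3://q-website-images/docs/ --recursive --summarize

The end of the output includes “Total Objects” and “Total Size” lines.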

Now that the images are in S3, I’ll use S3 as the origin for an image CDN (ImageKit). ImageKit will auto-optimize the images.

Find Pageviews From Custom UTM Parameters in GA4

Google Analytics version 4 (GA4) is quite different from the previous version, called Universal Analytics (UA). GA4 is event-based, and the UI is quite different. Suppose you’ve got a link with UTM parameters like

www.example.com/?utm_source=marketo&utm_medium=email&utm_campaign=feb23-campaign&utm_id=12345

In GA4, if you go to Reports > Engagement > Pages and screens, you will see stats like pageviews for many pages. You can then filter to just one page, like a free trial page, by entering the page’s path in the search field, e.g. “/free-trial/”. You can then add a secondary dimension for source and medium, which gives you pageviews broken down by source / medium.

That breakdown may not include the source and medium from your UTM parameters, though. A better way to get a traffic report based on a specific source, medium, or campaign name is by going to Explorations.

Here, you can create a new exploration. In the left “Variables” column

  • give the exploration a name like “Feb 2023 Campaign”
  • add some dimensions like
    • Page path and screen class
    • Session campaign
    • Session source / medium
  • add some metrics like “Views” and “Sessions”

In the middle “Settings” column,

  • drag some or all dimensions from the left column to the “Rows” field
  • drag some or all metrics from the left column to the “Values” field
  • add some filters like
    • Session source / medium contains market
    • Session campaign contains “Feb 2023 Campaign”

You will then see the report on the right.

Here’s the mapping between UTM query parameter and UTM dimension in GA4:

  • utm_source → Session source
  • utm_medium → Session medium
  • utm_campaign → Session campaign
  • utm_id → Session campaign ID

To find the number of clicks on a link with a UTM, go to

  1. Reports > Acquisition > Traffic acquisition
  2. For the primary dimension, choose Session source, Session medium, or Session campaign
  3. In the Search field, enter a value for the dimension you chose
  4. Choose a date range
  5. Scroll to the right and under “Event count”, choose “click”.

Learn more about GA4 traffic sources.

Programmatically Migrate a Website From Handlebars to Nunjucks

I’m currently migrating a large website from Handlebars to Nunjucks. Since the website is updated daily and has far too many pages to convert by hand, I wrote a script to convert the Handlebars syntax to Nunjucks syntax programmatically using JavaScript (Node.js). So far, it’s working very well. Here’s how I’m doing it, and how you can do something similar when confronted with a migration project.

Basically, the way it works is

  1. it recursively finds all files in a folder called “temp”
  2. if the file path ends with “hbs” – indicating it is a Handlebars file – then for each file, it executes a series of regex search and replace commands, e.g.
    • replace {{#if class}} with {% if class %}
    • replace {{/if}} with {% endif %}
    • and so on.
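A stripped-down sketch of the script is below; only the two replacements above are shown, and a real migration will need many more rules.

import fs from "fs";
import path from "path";

// Recursively collect all file paths under a folder.
function walk(dir) {
  let results = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    results = results.concat(entry.isDirectory() ? walk(fullPath) : [fullPath]);
  }
  return results;
}

for (const file of walk("./temp")) {
  if (!file.endsWith("hbs")) continue; // only convert Handlebars files

  let source = fs.readFileSync(file, "utf8");

  // Simple search-and-replace conversions.
  source = source.replace(/\{\{#if (\w+)\}\}/g, "{% if $1 %}");
  source = source.replace(/\{\{\/if\}\}/g, "{% endif %}");
  // ...and so on.

  fs.writeFileSync(file, source);
}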

Those are simple search-and-replace situations. There may be a situation where you’ll need an advanced search and replace, e.g. when replacing

{{> social-list 
    dark="true" 
    centered="true"}} 

with

{% set dark="true" %}
{% set centered="true" %}
{% include "social-list.njk" %}

In this case, you can use a “replacer” function, which allows you to do much more to manipulate the output.
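For the partial example above, a replacer might look something like this (it assumes attribute values contain no spaces):

// Convert {{> partial-name attr="value" ...}} into {% set %} lines
// followed by an include of the matching .njk file.
source = source.replace(
  /\{\{>\s*([\w-]+)\s+([^}]*)\}\}/g,
  function (match, partialName, args) {
    const sets = args
      .trim()
      .split(/\s+/)
      .map((pair) => `{% set ${pair} %}`)
      .join("\n");
    return `${sets}\n{% include "${partialName}.njk" %}`;
  }
);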

If you need to test regular expressions, https://regexr.com/ may come in handy.

When you’re all done and you’ve built the HTML files from both the Handlebars templates and the Nunjucks templates, you can write a script that recursively reads all HTML files in each build output folder and lists each HTML file path along with its file size in each build. The file sizes should be the same or almost the same. If some are not, then the migration script didn’t convert those templates correctly. Maybe something like:

File Path              File Size (KB) (hbs)    File Size (KB) (njk)    Difference (KB)
/index.html            181.3                   180.7                   0.6
/about/index.html      153.2                   23.5                    129.7
/products/index.html   353.6                   350.2                   3.4
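A sketch of that comparison script, assuming the two builds live in build-hbs and build-njk folders:

import fs from "fs";
import path from "path";

// Recursively map every HTML file under a folder to its size in KB.
function htmlSizes(dir, base = dir) {
  const sizes = {};
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      Object.assign(sizes, htmlSizes(fullPath, base));
    } else if (entry.name.endsWith(".html")) {
      sizes[path.relative(base, fullPath)] = fs.statSync(fullPath).size / 1024;
    }
  }
  return sizes;
}

const hbs = htmlSizes("./build-hbs");
const njk = htmlSizes("./build-njk");

// Print path, both sizes, and the difference, tab-separated.
for (const [file, hbsKB] of Object.entries(hbs)) {
  const njkKB = njk[file] ?? 0;
  const diff = Math.abs(hbsKB - njkKB);
  console.log(`/${file}\t${hbsKB.toFixed(1)}\t${njkKB.toFixed(1)}\t${diff.toFixed(1)}`);
}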

Moom: MacOS Window Management with Instant Arrangements

With so many people working both from home and at the office, it can be annoying to have to rearrange your application windows when you move between the two locations. This is especially true for people like me who use multiple monitors, two of which are 32″ 4K displays, and who need to display multiple windows on each screen.

Though I have a similar setup at home, my application windows always get jumbled up when I move between locations, possibly because the standalone monitors are not all the same brand with the same exact resolution.

Most window management apps allow you to move and resize windows in a grid, e.g.

  • left 50% of screen,
  • bottom 50% of screen,
  • right 33% of screen,
  • top 50%, left 50% of screen,
  • etc

These are fine if you aren’t going to move locations often and don’t have too many windows. If you want the same layout spanning multiple monitors and the ability to instantly move and resize all windows to that layout, then I recommend Moom. Here’s how to use Moom to save layouts for multiple monitor configurations.

  1. At location 1, e.g. work, open your applications and arrange them how you like
  2. Open Moom and create a custom preset with the following settings
    • Type: Arrange Windows
    • Name: I put “3 Monitors – Work”
    • Uncheck all checkboxes
  3. Click “Update Snapshot”

This saves the layout as a preset. To test it, resize and move all your windows around. Then, hover over the green dot in any one window and click on the preset. All windows will instantly move to how you had them.

When you’re at home, you can create another preset and call it something like “3 Monitors – Home”. Now, you no longer have to mess around with moving windows around. Just click on a preset from any open window and get back to business.

Moom has a one-time cost of $10, and it’s well worth it.

Make Multiple REST API Calls in Serial and Parallel Using Node.js

In this tutorial, I will explain how we can fetch remote paginated JSON data in serial (one request at a time) and in parallel (multiple requests at once).

Data

You can get test data to fetch from RapidAPI, but I’m going to fetch video data from Vimeo using the Vimeo API.

Fetch Method

There are many ways you can fetch remote data. The RapidAPI website provides code snippets for various languages and fetch methods. For example, for Node.js, there’s HTTP, Request, Unirest, Axios, and Fetch.

Some services like Vimeo provide libraries and SDKs in a few languages like PHP, Python and Node.js. You can use those as well if you’d like.

I’m actually going to use the Got library [GitHub], which is very popular.

CommonJS vs ESM

Many of the latest Node packages are now native ESM instead of CommonJS. Therefore, you can’t require modules like this

const got = require('got');

Instead, you must import modules like this

import got from 'got';

According to this page, you can convert your project to ESM or use an older version of the got package that uses CommonJS.
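If you go the older-version route, got v11 was the last major version published as CommonJS, so pinning it should let you keep using require (worth confirming against the got release notes):

// npm install got@11   (last CommonJS release of got)
const got = require('got');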

If using ESM, you need to put "type": "module" in your package.json.

Authentication

Many services like Vimeo require authentication in order to use their API. This often involves creating an access token and passing it in the header of the API call like this

In cURL:

curl https://api.vimeo.com/tutorial -H "Authorization: bearer {access_token}"

As HTTP:

GET /tutorial HTTP/1.1
Host: api.vimeo.com
Authorization: bearer {access_token}

Setup

Let’s set up our project. Do the following:

  1. Create a new folder, e.g. test
  2. Open the folder in a code editor (I’m using VisualStudio Code)
  3. Open a terminal (I’m doing it in VS Code)
  4. Initialize a Node project by running npm init -y

This will generate a package.json file in the folder.

Since we’re using ESM and will import modules rather than require them, add the following to the package.json file.

"type": "module"

Call the Vimeo API

Let’s start by calling the Vimeo API just once. Create a new file called get-data-one.js and copy the following contents into it. Replace {user_id} with your Vimeo user ID and {access_token} with your Vimeo access token.

import got from 'got';

let page = 1;
let per_page = 3;
let fields = "privacy,link,release_time,tags,name,description,download";

const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;
const options = {
  method: 'GET',
  headers: {
    'Authorization': 'bearer {access_token}'
  }
};

let data = await got(url, options).json();
console.log(data);

We’re importing the got library. For this to work, we need to install the got package. Run the following command.

npm install got

This will download the got package and its dependencies into the node_modules folder.

In the code, the Vimeo endpoint we’re calling is /users/{user_id}/videos, which returns all videos that a user has uploaded. According to the API docs, we can

  • Specify the page number of the results to show using page
  • Specify the number of items to show on each page of results, up to a maximum of 100, using per_page
  • Specify which fields to return using fields

These parameters can be added to the endpoint URL in the query string, which is what we’ve done. For this test, we’ll just call one page and return the records (videos). We call the API using the got library and dump the results to the console. Let’s run the script and check the output. Run the following command.

node get-data-one.js

As expected, here’s the output.

The output starts with pagination info and the total number of available records (videos) followed by the actual data in the form of an array of video objects. In this case, we see 3 objects because we set per_page to 3.
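Abridged, the response body is shaped something like this (the values are placeholders):

{
  "total": 100,
  "page": 1,
  "per_page": 3,
  "paging": {
    "next": "/users/{user_id}/videos?page=2&per_page=3",
    "previous": null,
    "first": "/users/{user_id}/videos?page=1&per_page=3",
    "last": "/users/{user_id}/videos?page=34&per_page=3"
  },
  "data": [
    { "name": "Video 1", "link": "https://vimeo.com/...", "privacy": { "view": "anybody" } },
    { "name": "Video 2", "link": "https://vimeo.com/...", "privacy": { "view": "anybody" } },
    { "name": "Video 3", "link": "https://vimeo.com/...", "privacy": { "view": "unlisted" } }
  ]
}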

Let’s update our code to write the output to a file. That will make it easier to read when there’s a lot of data. Add the following code snippet

import fs from "fs";
const stream = fs.createWriteStream("video-data.json", { flags: 'w' });
stream.once('open', function (fd) {
  stream.write(JSON.stringify(data) + "\n");
  stream.end();
});

so the code looks like this:

import fs from "fs";
import got from 'got';

let page = 1;
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download";

const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;
const options = {
  method: 'GET',
  headers: {
    'Authorization': 'bearer {access_token}'
  }
};

let data = await got(url, options).json();
console.log(data);
const stream = fs.createWriteStream("video-data.json", { flags: 'w' });
stream.once('open', function (fd) {
  stream.write(JSON.stringify(data) + "\n");
  stream.end();
});

We don’t need to install the fs package because that’s included in Node by default. The stream will write data to a file we’ll call video-data.json and we pass it the “w” flag to overwrite any existing contents of the file.

When we rerun the script, we see the file is created. We can format (prettify) it so it’s easy to read.

Call the Vimeo API Multiple Times in Serial with Pagination

Now, let’s say we want to fetch more data, but the API limits how many records are returned in a single call. In this case, we need to call the API in a loop passing a different page number. Let’s create a new file called get-data-serial.js with the following code.

import fs from "fs";
import got from 'got';

let data = [];
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download";
const options = {
    method: 'GET',
    headers: {
      'Authorization': 'bearer {access_token}'
    }
}

for(let page = 1; page <= 3; page++) {
    const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;

    let somedata = await got(url, options).json();
    data.push(somedata);
    console.log(page);
}

console.log(data);
const stream = fs.createWriteStream("video-data.json", { flags: 'w' });
stream.once('open', function (fd) {
  stream.write(JSON.stringify(data) + "\n");
  stream.end();
});

Here, I’m using a simple for loop to loop through 3 pages. I also created a data variable as an empty array. With each loop iteration, I push the page’s returned data to the data array. When all is done, I write the data array to a file.

If you inspect the file, you’ll see that 3 pages of data were returned. We ran this in serial, so the order of the output is page 1, page 2, and page 3.

Call the Vimeo API Multiple Times in Parallel with Pagination

Now, let’s do the same thing, but asynchronously (in parallel). Create a new file called get-data-parallel.js with the following code.

import fs from "fs";
import got from 'got';

const options = {
    method: 'GET',
    headers: {
    'Authorization': 'bearer {access_token}'
    }
};

let data = [];
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download";
let pages = [1,2,3];

await Promise.all(pages.map(async (page) => {
    const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=2&fields=privacy,link,release_time,tags,name,description,download`;

    let somedata = await got(url, options).json();
    data.push(somedata);
    console.log(page);
}));

console.log(data);
const stream = fs.createWriteStream("video-data-parallel.json", { flags: 'w' });
stream.once('open', function (fd) {
  stream.write(JSON.stringify(data) + "\n");
  stream.end();
});

In this case, instead of a for loop, we’re using Promise.all and passing it an array of page numbers that we loop over using the map function. When we run the script, you’ll notice 2 things:

  1. the script runs faster because the API calls are made in parallel (concurrently) rather than one after the other in serial.
  2. the order of the output is no longer consecutive by page number. In this example, it was page 1, page 3, page 2.

Modifying the JSON Output Structure

As shown in the previous screenshot, the API call returns an object containing pagination info followed by a data array – an array of objects containing video info.

What if we just want the data objects and not the pagination info? We can do that by modifying the structure of the JSON output. We can replace

data.push(somedata);

with

data.push(somedata.data);

but then the output becomes an array of arrays.

To fix this, let’s flatten the array by adding the following code:

data = data.flat(1);

right before we log it to the console and write it to the file.

Now, the output file contains a single flattened array of video records.

Filtering Out Certain Records

What if we want to filter out certain records? For example, say we want to filter out all videos that are not public, i.e. we only want videos where privacy.view = “anybody”. We can use the filter function to do that, like this:

somedata = somedata.filter(video => video.privacy.view === "anybody" );

Now, we only get 5 records instead of 6 because one of them had privacy.view = “unlisted”.

What if we want to exclude videos in the “Educational” category? We can do so like this:

somedata = somedata.filter(function (video) {
    // keep the video only if none of its categories is "Educational"
    return !video.categories.some(function (category) {
        return category.name === "Educational";
    });
});

Now, the output is only one record.

Removing Fields From the JSON Output

Each video record can contain a lot of information, including information we don’t need. For example, the privacy object contains 5 keys.

If we want to return just one privacy key, say “view”, then we can do so using the map function as follows:

// simplify privacy object to just privacy.view
somedata = somedata.map(function (video) {
    video.privacy = video.privacy.view;
    return video;
});

For each video record, the “download” field is an array of objects, one for each available rendition (resolution).

If we only want to, say, return “hd” videos and only their download links, we can combine map and filter like this:

// only include videos that have HD renditions, and only return the HD download links
somedata = somedata
    .map(function (video) {
        // keep only the HD renditions, reduced to just the fields we need
        video.download = video.download
            .filter(function (size) {
                return size.quality === "hd";
            })
            .map(function (size) {
                return {
                    rendition: size.rendition,
                    link: size.link
                };
            });
        return video;
    })
    .filter(function (video) {
        // drop videos that have no HD renditions
        return video.download.length !== 0;
    });

Now, the download array is simplified to just the HD rendition links.

The “categories” field is an array of objects with a lot of data, including objects and arrays of objects.

What if we want to simplify that to just an array of category names? We can do that like this:

somedata = somedata.map(function (video) {
    // replace the array of category objects with an array of category names
    video.categories = video.categories.map(function (category) {
        return category.name;
    });
    return video;
});

Now, the “categories” field is much simpler.

Complete Code

For reference, here’s the complete code for get-data-serial.js. The page limit and per_page values can be updated depending on how many results you want.

import fs from "fs";
import got from 'got';

let data = [];
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download,categories";
const options = {
    method: 'GET',
    headers: {
      'Authorization': 'bearer {access_token}'
    }
}

for(let page = 1; page <= 3; page++) {
    const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;

    let somedata = await got(url, options).json();

    somedata = somedata.data;

    // only include videos that are public
    somedata = somedata.filter(video => video.privacy.view === "anybody" );

    // only include videos that aren't in the "Educational" category
    somedata = somedata.filter(function (video) {
        // keep the video only if none of its categories is "Educational"
        return !video.categories.some(function (category) {
            return category.name === "Educational";
        });
    });

    // simplify privacy object to just privacy.view
    somedata = somedata.map(function (video) {
        video.privacy = video.privacy.view;
        return video;
    });

    // only include videos that have HD renditions, and only return the HD download links
    somedata = somedata
        .map(function (video) {
            video.download = video.download
                .filter(function (size) {
                    return size.quality === "hd";
                })
                .map(function (size) {
                    return {
                        rendition: size.rendition,
                        link: size.link
                    };
                });
            return video;
        })
        .filter(function (video) {
            return video.download.length !== 0;
        });

    // simplify categories array of objects to just an array of category names
    somedata = somedata.map(function (video) {
        video.categories = video.categories.map(function (category) {
            return category.name;
        });
        return video;
    });

    data.push(somedata);
    console.log(page);
}

data = data.flat(1);
console.log(data);
const stream = fs.createWriteStream("video-data.json", { flags: 'w' });
stream.once('open', function (fd) {
  stream.write(JSON.stringify(data) + "\n");
  stream.end();
});

Learn more about async/await.

Organizing HTML, CSS, and JS to Create Modular Website Components

Most websites contain the same or very similar layouts on multiple pages, e.g. header and footer. There also might be a few different hero section designs and a few different CTA section designs. Imagine having 10 product pages, each containing three 2-column sections with a text description in the left column and a screenshot in the right column. Each of these product pages may also have a CTA section with the same design but slightly different text and links.

It’s common to put shared CSS in a shared CSS file, e.g. shared.css, common.css, or global.css. This especially makes sense for the header and footer, which are usually shown on all pages. But over time, that shared CSS file can become very long because you may have a lot of CSS for many different common sections. This makes it difficult and risky to edit the code for just one particular section. It also makes it hard to copy a section from one page to another. If the HTML, CSS, and JS for a section aren’t isolated, you may not copy all the necessary code, not to mention you could end up with inconsistencies between two or more sections that should have the same design.

Consolidating all CSS into the fewest files possible is good for website performance (the fewer files, the fewer network requests), but nowadays, it’s common for websites, including simple static websites, to go through an automated build process to optimize the files before publishing them. The build process can do many things like minify and combine multiple CSS and JS files into single CSS and JS files, add prefixes to CSS using tools like PostCSS auto-prefixer, etc.

Following is one simple approach to grouping HTML, CSS, and JS by website section. This approach can also be used for any part of a website, like blocks within a section, but to keep things simple, we’ll just look at sections, which I define as horizontal rows of related content, e.g.

  • header section
  • hero section design 1
  • hero section design 2
  • features section design 1
  • features section design 2
  • resources section
  • CTA section
  • footer section

Here’s one possible file structure:

my-website/
├─ src/
│  ├─ index.njk
│  ├─ index.css
│  ├─ index.js
│  ├─ product1/
│  │  ├─ index.njk
│  │  ├─ index.css
│  │  ├─ index.js
│  ├─ product2/
│  │  ├─ index.njk
│  │  ├─ index.css
│  │  ├─ index.js
│  ├─ components/
│  │  ├─ header/
│  │  │  ├─ header.njk
│  │  │  ├─ header.css
│  │  │  ├─ header.js
│  │  ├─ footer/
│  │  │  ├─ footer.njk
│  │  │  ├─ footer.css
│  │  │  ├─ footer.js
│  │  ├─ section1/ (e.g. hero layout design 1)
│  │  │  ├─ section1.njk
│  │  │  ├─ section1.css
│  │  │  ├─ section1.js
│  │  ├─ section2/ (e.g. hero layout design 2)
│  │  │  ├─ section2.njk
│  │  │  ├─ section2.css
│  │  │  ├─ section2.js
│  │  ├─ section3/ (e.g. features layout design 1)
│  │  │  ├─ section3.njk
│  │  │  ├─ section3.css
│  │  │  ├─ section3.js
│  │  ├─ section4/ (e.g. CTA layout design 1)
│  │  │  ├─ section4.njk
│  │  │  ├─ section4.css
│  │  │  ├─ section4.js
├─ build/
│  ├─ index.html
│  ├─ index.css
│  ├─ index.js
│  ├─ product1/
│  │  ├─ index.html
│  │  ├─ index.css
│  │  ├─ index.js
│  ├─ product2/
│  │  ├─ index.html
│  │  ├─ index.css
│  │  ├─ index.js

In the src (source) folder, I’m using Nunjucks (njk) files instead of HTML files so they can include logic and pull in the components (partials). When the source files are processed, the built files show up in the “build” folder. For the home page source file (index.njk), the structure of the code could be like this

<html>
<head>

    {% include "/src/components/header/header.css" %}
    {% include "/src/components/footer/footer.css" %}
    {% include "/src/index.css" %}
</head>
<body>
    {% include "/src/components/header/header.njk" %}

    ... some HTML ...
    
    {% include "/src/components/footer/footer.njk" %}

    {% include "/src/components/header/header.js" %}
    {% include "/src/components/footer/footer.js" %}
    {% include "/src/index.js" %}
</body>
</html>

Note that the home page has its own CSS and JS files for elements that are not part of a component. When this file is built, the CSS and JS files will be combined (Netlify can do this automatically) and the included header and footer njk references will be replaced with their contents, e.g.

<html>
<head>
    <link rel="stylesheet" href="index-435kl3jl3j.css">
</head>
<body>
    <header>
        <p>FOO Website</p>
        <ul>
            <li>Home</li>
            <li>Product 1</li>
            <li>Product 2</li>
        </ul>
    </header>

    ... some HTML ...
    
    <footer>
        Copyright 2023
    </footer>

    <script src="index-43533lj344.js"></script>
</body>
</html>

Here’s another example. For product page 1 (product1/index.njk), the file contents may look like this

<html>
<head>

    {% include "/src/components/header/header.css" %}
    {% include "/src/components/section1/section1.css" %}
    {% include "/src/components/section4/section4.css" %}
    {% include "/src/components/header/footer.css" %}
    {% include "/src/product2/index.css" %}
</head>
<body>
    {% include "/src/components/header/header.njk" %}

    {% set title = "Product 1" %}
    {% set heroImage = "product1.jpg" %}
    {% include "/src/components/section1/section1.njk" %}

    ... some HTML ...

    {% set text = "Try Product 1 Now" %}
    {% set link = "/product1/free-trial/" %}
    {% include "/src/components/section4/section4.njk" %}
    
    {% include "/src/components/footer/footer.njk" %}

    {% include "/src/components/header/header.js" %}
    {% include "/src/components/section1/section1.js" %}
    {% include "/src/components/section4/section4.js" %}
    {% include "/src/components/footer/footer.js" %}
    {% include "/src/product2/index.js" %}
</body>
</html>

In the code example above, we’re passing some variables into components section1 and section4. That allows us to reuse a component’s layout and design while changing its content. Since product pages usually look very similar, the code for product2/index.njk might look like this

<html>
<head>

    {% include "/src/components/header/header.css" %}
    {% include "/src/components/section1/section1.css" %}
    {% include "/src/components/section4/section4.css" %}
    {% include "/src/components/header/footer.css" %}
    {% include "/src/product2/index.css" %}
</head>
<body>
    {% include "/src/components/header/header.njk" %}

    {% set title = "Product 2" %}
    {% set heroImage = "product2.jpg" %}
    {% include "/src/components/section1/section1.njk" %}

    ... some HTML ...

    {% set text = "Try Product 2 Now" %}
    {% set link = "/product2/free-trial/" %}
    {% include "/src/components/section4/section4.njk" %}
    
    {% include "/src/components/footer/footer.njk" %}

    {% include "/src/components/header/header.js" %}
    {% include "/src/components/section1/section1.js" %}
    {% include "/src/components/section4/section4.js" %}
    {% include "/src/components/footer/footer.js" %}
    {% include "/src/product2/index.js" %}
</body>
</html>

I reused the components but changed the value of the variables that are referenced in the components.

To prevent code conflicts, you can set an ID on the root element of each component. For example,

section1.njk

<div id="section1">
    ... some HTML ...
</div>

section2.njk

<div id="section2">
    ... some HTML ...
</div>

Then, in the component’s CSS, to prevent CSS conflicts, you can prefix all rules like this

section1.css

#section1 .intro {
    ... some CSS ...
}

#section1 .features {
    ... some CSS ...
}

section2.css

#section2 .intro {
    ... some CSS ...
}

#section2 .features {
    ... some CSS ...
}

You can do the same in each component’s JavaScript file, e.g.

section1.js

$("#section1 .intro")...

section2.js

$("#section2 .intro")...

Another benefit of this approach is you can create a page showing a preview of all components you have. When you want to create a new page, you can browse the list of component previews to see if you can reuse an existing component or decide if you need to create a new component.
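A preview page like that is just another source file that includes every component. A minimal sketch, using the component names from the file structure above:

<html>
<head>
    {% include "/src/components/section1/section1.css" %}
    {% include "/src/components/section2/section2.css" %}
</head>
<body>
    <h1>Component Previews</h1>

    <h2>section1 (hero design 1)</h2>
    {% set title = "Sample Title" %}
    {% set heroImage = "sample.jpg" %}
    {% include "/src/components/section1/section1.njk" %}

    <h2>section2 (hero design 2)</h2>
    {% include "/src/components/section2/section2.njk" %}

    {% include "/src/components/section1/section1.js" %}
    {% include "/src/components/section2/section2.js" %}
</body>
</html>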

git cherry-pick Multiple Commits

Cherry-picking individual commits to merge them from one git branch to another is a very common practice. For me, I usually cherry-pick one or more commits from a develop branch to a master / main branch. The problem, however, is when I need to cherry-pick multiple commits made over a long period of time, like one month. Most git clients don’t make it easy to filter commits by keyword, and if they do, then they don’t let you easily select all filtered commits. So, back to the command line it is.

This post will describe one way to easily find all commits matching one or more keywords and extract their commit IDs on one line so you can quickly run the git cherry-pick command.

Step 1: Find and review filtered commits

We usually prefix our commit messages with an identifier so we can filter them amongst many other commits. For example, all commits related to one project may have a commit message prefix of “LP:8929” where, in this case, “LP” stands for Landing Page. Sometimes, we might accidentally use an equals sign (=) instead of a colon (:). So, we’ll have one or more keywords to use when we filter all commits.

Furthermore, we usually cherry-pick commits from a develop branch to the master branch while we’re on master. Here’s the command, with each part explained:

Command                              Description
git log                              get the git log
master..develop                      as we’re on the master branch, get the log from the develop branch
--oneline                            format the output in one line (short format)
--grep='LP:8929' --grep='LP=8929'    filter results to only those containing either ‘LP:8929’ or ‘LP=8929’
-i                                   do a case-insensitive search

git log master..develop --oneline --grep='LP:8929' --grep='LP=8929' -i
48b1cbcef51 LP:8929 - Fixed images path by using imagekit variable
8efa19678b3 LP:8929 - Fixed text alignment
3ab0954c725 LP:8929 - Updated banner and VS page on the landing page
bba58a1c5eb LP:8929 - Updated main heading of features section
fb56da3b134 LP:8929 - Updated content and created carousel as per the requirement
c7c6d50fbdb LP:8929 - Updated styling of the page
bc8a7fc38a1 LP:8929 - Updated icons of the comparison section
9a90ccb91ad LP:8929 - Updated text styling
a3bc0baf65f LP:8929 - Updated headshots and styling of the page
13977d2eb12 LP:8929 - Updated content as per the doc
7e589debba8 LP:8929 - First Pass: Created Landing page of Tenable v Qualys competitive page

Now that we have our list of commits, we need to review them to make sure they are correct. In our case, we primarily want to make sure the keywords are at the beginning of the commit message since that’s our commit message format.
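Since --grep accepts regular expressions, you can also have git enforce the start-of-message match itself by anchoring a single combined pattern (^ matches the start of a line in the commit message):

git log master..develop --oneline --grep='^LP[:=]8929' -i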

Step 2: Extract commit IDs

If our list of commits is correct, then we need to extract the commit IDs. However, the git cherry-pick command requires the list of commit IDs to be in chronological order. By default, the git log output is in reverse chronological order. Here’s the updated command with some modifications.

Command             Description
--reverse           reverse the git log output to chronological order
awk '{print $1}'    extract only the text in the first column (the commit IDs)
tr '\n' ' '         replace line breaks with spaces

git log master..develop --reverse --oneline --grep='LP:8929' --grep='LP=8929' -i | awk '{print $1}' | tr '\n' ' '
48b1cbcef51 8efa19678b3 3ab0954c725 bba58a1c5eb fb56da3b134 c7c6d50fbdb bc8a7fc38a1 9a90ccb91ad a3bc0baf65f 13977d2eb12 7e589debba8

Step 3: Run git cherry-pick command

Now that we have our list of commit IDs in the correct order, we can copy it, type “git cherry-pick” and paste the list to create a command like the following.

git cherry-pick 48b1cbcef51 8efa19678b3 3ab0954c725 bba58a1c5eb fb56da3b134 c7c6d50fbdb bc8a7fc38a1 9a90ccb91ad a3bc0baf65f 13977d2eb12 7e589debba8

Hit enter to run your git cherry-pick command.

Creating a Frontend-only JavaScript App Using Svelte

Svelte is a JavaScript framework similar to React, Vue, etc., but with some fundamental differences that I think make it better. It’s also much more intuitive to understand, as the web component structure of the code is dead simple. When building an app using Svelte, you’re encouraged to use SvelteKit, the official Svelte application framework. But if you just want a simple frontend-only site/app, you can use Svelte without SvelteKit. You can then easily take the built CSS and JS files and include them in any existing web page. Then, you just add a simple div

<div id="app"></div>

in your existing HTML file where you want the app to go.

Here’s an example.

Create a Vite Project

Vite is a front-end build tool and dev server. It includes Rollup to bundle and optimize assets. Run npm init vite and follow the prompts, choosing Svelte as the framework. I chose the default project name of “vite-project”.

This will create a folder named after your project and create some files and folders within it.

Then, as the instructions say, run the commands

cd vite-project
npm install
npm run dev

npm install will download the node packages specified in the package.json file into the node_modules folder.

Start Dev Server

npm run dev will launch a dev server. Running the dev script starts a program called Vite. Vite’s job is to take your application’s source files, pass them to other programs (including Svelte, in our case) and convert them into the code that will actually run when you open the application in a browser.

If you go to the local URL http://127.0.0.1:5173/, you will see the starter Svelte app that was created automatically.

The app includes a simple interactive counter. When you click the button, the counter increments.
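The counter is itself a small Svelte component. The version generated by the template looks roughly like this:

<script>
  let count = 0
  const increment = () => {
    count += 1
  }
</script>

<button on:click={increment}>
  count is {count}
</button>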

src Folder

If you look at the src folder, you’ll see a few files:

  • The assets folder contains static assets, like images
  • The lib folder contains Svelte components. In this case, there’s the counter component.
  • app.css is a global CSS file for the entire app
  • App.svelte is your main app code. In this case, it imports the svelte.svg logo and the Counter component.
<script>
  import svelteLogo from './assets/svelte.svg'
  import Counter from './lib/Counter.svelte'
</script>
  • main.js is the main JavaScript file. It loads your app’s CSS (app.css) and your app’s main code (App.svelte). It will also load the app in the HTML element with ID app.
import './app.css'
import App from './App.svelte'

const app = new App({
  target: document.getElementById('app'),
})

export default app

Build the App for Production

When you’re done developing your app (creating components, etc), you can run a build. This will bundle all CSS and JS files into a single CSS and JS file, optimize the bundle (e.g. minify it), and output the production files in the dist (distribution) folder. Run a build with the following command.

npm run build

You’ll see a dist folder created containing an index.html file and the bundled assets.

If you open index.html, you’ll see it references the two bundled CSS and JS files along with a div element with ID app where the app will be injected.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Vite + Svelte</title>
    <script type="module" crossorigin src="/assets/index-e83122bb.js"></script>
    <link rel="stylesheet" href="/assets/index-9ea02431.css">
  </head>
  <body>
    <div id="app"></div>
    
  </body>
</html>

Preview Built App

To preview the app from the built files, run

npm run preview

This will run a local server loading the built files.