There are at least 3 major things that affect music audio quality:
Speakers
Audio source, e.g. mp3 file, FLAC file, etc
Digital-to-Analog Converter (DAC)
Some people may argue that transmitting audio over Bluetooth degrades sound quality, but in my experience the difference is so small that it's negligible.
Speakers
Needless to say, quality speakers are necessary to hear music at a higher quality. Don't expect to hear quality audio from cheap $10 earphones. Since I'm not an audiophile and I'm not interested in spending thousands of dollars just on speakers, I just have what I guess are prosumer speakers. Specifically, I have the Sony WH-1000XM4 over-ear headphones and the Sony WI-1000XM2 neckband earphones.
Since it makes no sense to buy them at full price, I buy them renewed on Amazon at a big discount; even renewed, they look and function exactly like they're brand new.
The WH-1000XM4 has a better sound stage, but it's bulkier and leaks audio a lot. Also, it's not great for working out because it moves around too much and gets in the way. The WI-1000XM2 is compact, doesn't leak audio, and can easily rest on my neck when not in use. The problem is that when listening to music from my phone, the volume is often not high enough, especially at the gym or when traveling by plane. This is where an amplifier (amp) takes care of the volume issue.
Audio source
I’ve dabbled with lossless FLAC files, but when compared to high-bitrate mp3 files, I personally can’t notice a big enough improvement to justify the cost and huge file size. I’m okay with mp3s as long as the bitrate is high enough. I normally just buy mp3s from Amazon Music. Don’t expect to hear quality audio from low-bitrate mp3s, though. The compression is too lossy.
Digital-to-Analog Converter (DAC)
Chances are you listen to music from your phone and sometimes from your laptop, like me. The problem is that the built-in converters that turn digital audio signals into analog signals are likely of low quality. I have the Google Pixel 4a 5G, a mid-range phone. But even if you have a high-end phone, its digital-to-analog converter (DAC) is most likely not as good as a dedicated DAC. Fortunately, there are small Bluetooth DACs that are lightweight and can clip onto your shirt. I tested the EarStudio ES100 MK2 ($60 renewed, $80 new on Amazon).
When comparing the audio quality with and without this DAC, it’s clear that the DAC makes a decent, if not big, difference, depending on the song I’m listening to. The DAC is also an amplifier and can increase the volume to levels higher than I’d ever need it to be. It didn’t come with an aux cable, so I bought a short 4-inch one. The setup might seem complex, but it’s not that bad, especially if you’re just sitting for a long time, like on a long flight.
Instead of pairing your head/earphones to your audio source (phone, laptop, etc), you pair the DAC to it.
Though the DAC has physical volume controls, I find it easier to adjust the volume from the phone app. It's recommended to set the source volume (phone or laptop) to max and adjust the analog (DAC) volume instead. The app has a lot of options with clear explanations, but I find the default settings sufficient.
Many websites include PDF files. These PDFs are usually much larger than other files and can take up a lot of space. You may want to keep all website files, like images and PDFs (binary files), together with your HTML, CSS and JS files (text files) and put them all in version control, like GitHub. But there are downsides to this:
Git version control is designed for text files, not binary files. Even though you can use Git LFS to version binary files, there are simpler, better alternatives.
Website images are better served from an image CDN like Cloudinary or ImageKit. These services will automatically optimize images on the fly.
PDF files are better served from a CDN. Amazon AWS S3 can be used to store your PDFs with versioning and AWS CloudFront can serve those PDFs from a CDN. With CloudFront, you can also write a function to redirect from one PDF file to another in case you need to delete a file.
The steps below describe how to set up AWS S3 and CloudFront to host PDFs and to set up redirects.
Note: you can also create redirects using Lambda@Edge functions (launched in 2017), but they are more complicated and cost about six times as much as CloudFront Functions (launched in 2021).
1. Create an S3 bucket
Log in to the AWS console, go to S3, and click “Create bucket”. Choose a bucket name like “pdfs”.
Since you want people to be able to access the PDFs, uncheck “Block all public access” and check “I acknowledge that the current settings might result in this bucket and the objects within becoming public.”
If you want, choose the radio button to enable versioning.
Ignore the other options, if you want, and then click the "Create bucket" button.
2. Upload PDFs
You can drag and drop your PDFs to upload them. If you have many PDFs, like thousands, then it’s better to use the AWS CLI S3 Sync command.
As a test, I just uploaded 2 PDF files.
3. Create a CloudFront Distribution
In the AWS console, go to CloudFront and click "Create Distribution". For "Origin domain", choose the Amazon S3 bucket you created in step 1.
For the viewer protocol policy, choose “Redirect HTTP to HTTPS” since that’s a good policy IMO.
Ignore all other options, if you want, and click the “Create Distribution” button.
Now, the PDF files in your S3 bucket will be available in a CDN at the CloudFront domain provided, e.g. d2a5k3j4u1zr32.cloudfront.net/test-pdf-1.pdf
4. Create a CloudFront Function to Redirect Requests
Click on the distribution and then click on “Functions” in the left sidebar.
Click the “Create Function” button and enter a name for the function, e.g. “Redirects”.
You will see 3 tabs: Build, Test, and Publish.
In the “Build” tab, enter the code below and customize as needed.
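A CloudFront Function for redirects can look roughly like the sketch below. This is a minimal example, not necessarily the exact code I used; the "/test-pdf-2.pdf" to "https://www.google.com" rule matches the test described below, and you would replace the redirects map with your own paths.

function handler(event) {
    var request = event.request;

    // Map of request paths to redirect targets. Add your own rules here.
    var redirects = {
        '/test-pdf-2.pdf': 'https://www.google.com'
    };

    var target = redirects[request.uri];
    if (target) {
        // Return a 302 redirect response instead of forwarding the request to S3.
        return {
            statusCode: 302,
            statusDescription: 'Found',
            headers: {
                'location': { value: target }
            }
        };
    }

    // No redirect rule matched, so pass the request through unchanged.
    return request;
}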
Click the “Save Changes” button and then click the “Test” tab. You will see a field labeled “URL Path” with a default value of “/index.html”.
Since we don't have a redirect rule for that URL path, we don't expect any redirection to happen. Click the "Test Function" button. You will see output indicating that the response URI is "/index.html", as expected.
Now, change the URL path to one you have a redirect for. In my example code, I am redirecting “/test-pdf-2.pdf” to “https://www.google.com”. Click the “Test Function” button. The output shows “https://www.google.com”.
Now, publish the CloudFront function. Click the "Publish" tab, then the "Publish Function" button.
Click “Add Association” to associate the function to your distribution. Choose your distribution in the Distribution field. Leave Event Type as “Viewer Request” and ignore Cache behavior. Click the “Add association” button.
Wait for the function to be deployed. Go back to the function list page and check the status column. It will say “Updating” for a few minutes.
Wait a few minutes. Reload the page and the status should change to “Deployed”.
Now, test out the redirect in production by going to the CloudFront URL of a path you have a redirect for. You should see the redirect work.
Adding UTM parameters to links is useful for tracking marketing efforts, e.g. if you have a banner or an email with links to a landing page, you'll want to know which method (banner or email) generated the most page visits and form fills. Google has a campaign URL builder that will generate URLs with UTMs for you. In Google Analytics, you can find pageviews to the landing page by UTM parameter. However, if you want to track any subsequent pages after the landing page, then you'll need a way to pass the UTMs along to the subsequent pages. In my particular situation, I needed to pass UTMs to a 3rd-party site. The visitor flow would be like this:
Click a banner on the home page of example.com. The banner has UTMs in the query string, e.g. example.com/landing-page?utm_source=home-page-banner
Land on an overview page on example.com, e.g. example.com/landing-page
Maybe visit other pages on example.com
Return to example.com/landing-page
Click a link to register for something on a 3rd-party site, e.g. foo.com/register
By default, only the first pageview of example.com/landing-page would include UTMs in the URL. To pass the UTMs to the link to the 3rd-party site, something extra needed to be done. I chose the following approach, which works well.
Write JavaScript code that runs on all pages.
If a URL contains UTM params, save the UTM name/value pairs as session cookies, overwriting any existing UTM cookies.
If a page has any <a> tags with the class “appendUTM”, then rewrite the href value by appending the UTM params.
I then added the class “appendUTM” to any links where I wanted to append the UTMs. In my case, it was the links to the 3rd-party registration site.
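Here's a minimal sketch of that script. The appendUTM class comes from the approach above; the cookie handling is simplified, and the other details are just illustrative assumptions.

// Run on every page: capture UTM params into session cookies,
// then append them to any link marked with the "appendUTM" class.
(function () {
  var UTM_KEYS = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];

  // 1. If the current URL has UTM params, save them as session cookies
  //    (no expiry = session cookie), overwriting any existing UTM cookies.
  var params = new URLSearchParams(window.location.search);
  UTM_KEYS.forEach(function (key) {
    var value = params.get(key);
    if (value) {
      document.cookie = key + '=' + encodeURIComponent(value) + '; path=/';
    }
  });

  // 2. Read any saved UTM cookies back into an object.
  var saved = {};
  document.cookie.split('; ').forEach(function (pair) {
    var parts = pair.split('=');
    if (UTM_KEYS.indexOf(parts[0]) !== -1) {
      saved[parts[0]] = decodeURIComponent(parts.slice(1).join('='));
    }
  });

  // 3. Append the saved UTM params to links that have the "appendUTM" class.
  document.querySelectorAll('a.appendUTM').forEach(function (link) {
    var url = new URL(link.href, window.location.origin);
    Object.keys(saved).forEach(function (key) {
      url.searchParams.set(key, saved[key]);
    });
    link.href = url.toString();
  });
})();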
I recently had to move 35,000+ website images from Git to AWS S3. The images were in many subfolders. First, I had to separate the images from all other files. Then, when I tried dragging and dropping the parent folder containing all the images to the AWS S3 web interface, the estimated upload time was 9 to 17 hours.
When I woke up in the morning, I found the upload completed with errors:
Here’s how I easily separated the images from all other files and successfully uploaded all 35,000+ images.
Separate images from other files
First, I wanted to see a list of all unique file extensions so I could know what image file extensions were being used.
find . -type f | sed 's|.*\.||' | sort -u
This returned a list like the one below.
JPG PNG ali bmp brs cnd CSS ...
Then, I copied the website root folder to a new sibling folder called "website-images", where I'd keep just the images.
Then, I deleted all files with those image extensions from the "website" folder.
As mentioned earlier, uploading 35,000+ images to S3 using the web interface took a long time and kept completing with errors. What ended up working was uploading the images using the AWS CLI. Here's how I did it.
I had to create an access key to authenticate. I created a new Identity and Access Management (IAM) user and then clicked the “Create access key” button to generate a new key.
I then saved those key values as environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION) per the AWS CLI instructions, replacing the placeholder values with my actual values.
For the default region, I chose the region for my S3 bucket.
Upload (sync) files
I then uploaded (synced) the files from my local folder to my remote S3 bucket using the S3 sync command. Since I had already uploaded some files, I was hoping to find a flag to skip uploading files that already exist at the destination. It turns out that the sync command does this by default. I first ran the command with the --dryrun flag to verify that the output path was correct.
Then, I reran the command without the dry-run flag.
aws s3 sync . s3://q-website-images/docs/
The command outputs a list of the files it uploaded.
When it was done, I tried rerunning the command, only to find that it completed with no output, indicating that all source files already existed in the destination. That was a sign that the sync was complete. Looking at the number of files in the S3 web console, I could see the correct number of files listed there.
Now that the images are in S3, I’ll use S3 as the origin for an image CDN (ImageKit). ImageKit will auto-optimize the images.
For me, my maintenance intake is currently 2650 calories per day.
Step 2: Calculate Calories & Protein to Lose Weight and Build Muscle
Losing Weight (Fat)
In order to lose weight by losing fat, you just need one thing: a net deficit of calories. But you don’t want too large a deficit because then you’ll lose both fat and muscle. You should target a deficit of 5 to 10% of your maintenance calories. You can lose weight by just consuming fewer calories or consuming more calories but burning extra calories by doing cardio exercises like running. Whether you just rest or you do cardio, your net calorie deficit should be 5 to 10% of your maintenance calories. For me, this value is currently between 2385 and 2517 calories.
Gaining Muscle
In order to gain muscle, you need 4 things:
a net surplus of calories
strength training until failure
sufficient protein consumption
rest (a minimum of 7 hours of sleep a day)
For the calories, you don’t want too large a surplus because then you’ll gain both muscle and fat. You should target a surplus of 5 to 15% of your maintenance calories. For me, this value is currently between 2782 and 3047 calories.
For the protein, you should target consuming 1 gram of protein for each pound of body weight. So, if you weigh 180 lbs, you should consume 180 grams of protein.
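To make the targets concrete, here's the arithmetic as a quick code sketch, using my current numbers (2650 maintenance calories, 180 lb body weight) as example inputs:

// Example numbers only; plug in your own maintenance calories and body weight.
var maintenanceCalories = 2650; // per day
var bodyWeightLbs = 180;

// Fat loss: a 5-10% calorie deficit below maintenance.
var cutHigh = Math.floor(maintenanceCalories * 0.95); // 2517
var cutLow = Math.floor(maintenanceCalories * 0.90);  // 2385

// Muscle gain: a 5-15% calorie surplus above maintenance.
var bulkLow = Math.floor(maintenanceCalories * 1.05);  // 2782
var bulkHigh = Math.floor(maintenanceCalories * 1.15); // 3047

// Protein: roughly 1 gram per pound of body weight.
var proteinGrams = bodyWeightLbs; // 180 g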
The calorie (energy / fuel) surplus is needed to rebuild the muscle you’ve broken down during strength training. Breaking down muscle fibers only happens if you train to failure. The large protein consumption is needed because muscles are made of protein. Muscle (protein) synthesis occurs while you’re sleeping, which is why it’s necessary to sleep enough after strength training.
Your weekly schedule would be a combination of resting days, cardio days, and strength training (weight lifting) days. Here’s an example.
DAY | ACTIVITY | CALORIES
Monday | Rest | Calorie Deficit, Extra Protein
Tuesday | Strength Training | Calorie Surplus, Extra Protein
Wednesday | Cardio | Calorie Deficit, Extra Protein
Thursday | Rest | Calorie Deficit, Extra Protein
Friday | Strength Training | Calorie Surplus, Extra Protein
Saturday | Cardio | Calorie Deficit, Extra Protein
Sunday | Strength Training | Calorie Surplus, Extra Protein
Step 4: Make a Meal Plan
When it comes to losing weight, you just need a calorie deficit, but you should consume healthy calories, e.g. no processed food, no added sugar, etc. For me, I try to stick to a keto diet, although that’s not absolutely necessary.
When it comes to building muscle, the hardest part will be trying to consume sufficient protein. If you weigh 180 lbs, you need to consume 180 grams of protein. That’s actually hard to do, which is why many people consume protein shakes to supplement their meals.
Here’s a list of protein-dense foods that can help you reach your protein consumption target.
Since the hardest thing is consuming enough protein, the meal plan below will focus on foods that hit the target protein amount of 180 grams without consuming an excess of calories. If there is still a calorie deficit, you can easily add any kind of healthy food to reach the calorie target.
For cardio, you can do almost anything: hiking, dancing, running, biking, etc. If you're low on time, you can buy a recumbent exercise bike with resistance. It lets you lie back and exercise in a comfortable position. The one below is lightweight and small and costs $178. You can easily put it in your living room and use it while watching TV.
Playing certain types of music can be motivating and make exercising more enjoyable. Many people wear bulky on-ear headphones. I prefer in-ear neckband earphones because they don’t move around and are lightweight. They also block out ambient sounds pretty well. I wear the Sony Wireless Behind-Neck Headset (WI-C400).
Workout gloves
If you don't wear padded gloves, you can easily develop calluses (thickened skin that forms as a response to repeated friction or pressure). Lifting weights is much more comfortable while wearing padded gloves.
Step 6: Count Calories
For calorie consumption, you can count calories by adding up all calories for each ingredient or food you consume. Look at the nutrition label on food packaging and/or look at online calorie databases.
To see how inaccurately a smartwatch measures calories burned, today I tracked a workout with both my Fossil Gen X watch (using the Google Fit watch app) and a Polar H10 chest strap. I did strength training for 1 hr 10 mins. When I started tracking on my watch, I chose "indoor workout" and the watch app just started tracking calories, time spent, heart rate, etc. When I started tracking using the Polar app on my phone, I was able to choose "Strength training" before the device started tracking vitals. Once I was done exercising, I stopped both apps.

As you can see below, the smartwatch says "Run", which I guess means it thought I was running on a treadmill. It also says I burned 482 calories. The Polar app on my phone says I burned 759 calories. That's way more than 482, a difference of 277 calories. While I was exercising, I had my Bluetooth earphones on, and the Polar app would play an audio message like "You are improving your fitness" or "You are burning fat". It would say the former when I was doing strength training and the latter when I was resting.
Step 7: Measure Progress
Weight Loss Progress
Measuring your weight loss is easy. Just weigh yourself regularly. To automate this, buy a Wi-Fi scale that records your weight and shows a graph of your progress on your phone. I personally use the Withings Body – Digital Wi-Fi Smart Scale with Automatic Smartphone App Sync. If weighing yourself every day, make sure to do it at a consistent time, e.g. right before bed or first thing in the morning, for more accurate results.
Muscle Gain Progress
To track your muscle gain, you’ll need to track your strength training weights, reps and sets for each exercise. Personally, I log my workouts using the free version of the FitNotes app. It’s a simple and easy-to-use app that just works. I can easily see my most recent reps and weights so I can either match or exceed them.
If you are able to lift heavier weights and perform more reps, then you must be building muscle, even if it's not immediately noticeable in the mirror. You can also try measuring the circumference of different parts of your body, e.g. your upper arm, but that's a hassle and inaccurate if you measure right after a workout when your muscles are swollen.
Over time, you can compare your strength training limits to see progress. Below is an example showing my actual results.
Google Analytics version 4 (GA4) is quite different from the previous version, called Universal Analytics (UA). GA4 is event-based, and the UI has changed significantly. Suppose you've got a link with UTM parameters and you want to report on the traffic it drove.
In GA4, if you go to Reports > Engagement > Pages and screens, you will see stats like pageviews for many pages. You can then filter to just one page, like a free trial page, by entering the page's path in the search field, e.g. "/free-trial/". You can then add a secondary dimension for source and medium. What you'll end up with is something like this:
This may not include the source and medium from your UTM parameters. A better way to get a traffic report for a specific campaign name or source/medium is to go to Explorations.
Here, you can create a new exploration. In the left “Variables” column
give the exploration a name like “Feb 2023 Campaign”
add some dimensions like
Page path and screen class
Session campaign
Session source / medium
add some metrics like “Views” and “Sessions”
In the middle “Settings” column,
drag some or all dimensions from the left column to the “Rows” field
drag some or all metrics from the left column to the “Values” field
In Google Earth Pro for Desktop, you can record a tour in real time by clicking the navigation controls or by clicking on saved placemarks. However, unless you are just moving from one point to another, the resulting tour may not be as smooth as you’d like. For example, if you have three placemarks, then as you click each placemark while recording the tour, the transition between placemarks will not be smooth.
To create a smooth tour that appears as if you are flying a plane or drone at a fixed altitude along a multipoint path, you need to create a path in Google Earth. Here’s an example. Let’s say we want to fly along the Las Vegas Strip.
Change Settings
Go to Tools > Options > Touring and change the settings as in this screenshot. Make sure to click the “Apply” button and the “OK” button to save your changes.
Now, click the "Navigation" tab and change the settings to match this screenshot.
Create Path
When adding a path, your mouse pointer will turn into a crosshair, and you will need to click to add points along your desired path. In this mode, you will not be able to zoom, change altitude or direction, or pan by clicking on the screen, because doing so would add path points. If you need to move around, you'll need to use the navigation controls.
When creating a path, I find it easier to have the view facing straight down at the ground, like this. In this example, my starting point will be just south of the south end of the Strip, before the Mandalay Bay.
Click Add > Path
A dialog window will pop up. We’ll name the path “Las Vegas Strip”. Let’s also specify the altitude we want our flight path to be from the ground.
Click the “Altitude” tab.
Set the Altitude to your desired flight height above the ground, e.g. 300m.
Make sure “Relative to ground” is selected.
Click to add path points
In the screenshot below, you'll see that I created 3 points. From bottom to top, there are 2 red points and 1 blue point.
Since I need to pan to the north to add more points along the Strip, I will use the navigation control up arrow to do so.
After adding the last point (just north of the STRAT), I zoomed out to check the entire path. As you can see, all but the last point are red, and the path curves to the right as it goes north.
Now that my path is done, I’ll click the OK button in the path dialog window. That adds the path to My Places.
Since I don’t want to see the white path line / curve, I’ll uncheck the checkbox next to the path name.
To play the path tour, just click the path Play Tour button, as shown below.
If you're happy with how the tour looks, you can record it by clicking the "Record a Tour" button and then clicking the Play Tour button.
Then click Tools > Movie Maker to export the video.
In my 12 years working in marketing, I've seen a few different organizational structures at both low and high levels. And in my particular role, I've had to work with pretty much everyone, which has given me exposure to many issues that often go unnoticed. While there are many ways you can structure an organization, whatever way you come up with should make sense for your particular organization's needs. Following is a structure that I think makes sense based on my experience. The specific job titles (chief, VP, director, manager, senior vs junior, etc) are just an example. The important thing is that the structure and hierarchy group people by function, commonality and importance. The structure below is for a 2000+ employee organization. Obviously, if your organization is much smaller or larger, or has more or fewer dependencies on particular functions, some positions and groups can be removed, consolidated, or even divided and expanded.
Level 1:
CMO (Chief Marketing Officer) or CPO (Chief Product Officer)
Level 2:
VP of Marketing
VP of Product
Levels 3 and 4:
Director of Content (or similar name)
This functional group primarily deals with marketing activities that involve text content. Since a big part of SEO involves text content, I put SEO Expert in this group.
Public Relations Expert(s)
Copyediting Expert(s)
Proofreading Expert(s)
SEO Expert(s)
Social Media Expert(s)
etc
Director of Design (or similar name, e.g. creative, etc)
This functional group primarily deals with marketing activities that involve visual design. Brand has to do with a company’s external public image, which relates to both public relations and design. Depending on your preference, this role could be under this “design” group or the “content” group above.
Graphic Designer Expert(s)
Web Designer Expert(s)
Brand Expert(s)
Video Expert(s)
UI / UX Expert(s)
etc
Director of Web (or similar name)
This functional group primarily deals with marketing activities that involve websites. Due to the criticality and complexity of today’s websites, significant dependencies that all functional groups often have on a company’s website, and unique technical skills that members of this group have, I made this group a standalone group rather than a subset of another group. Also, since email marketing is very effective, and because HTML emails contain code like that used on a website, I put HTML Email Expert(s) in this group. Though many marketing automation tools like Marketo include an email builder tool, I’ve found that they are limited in features and produce emails that don’t look professional unless the user has web development skills.
HTML, CSS Expert(s)
JavaScript Expert(s)
WordPress Expert(s)
HTML Email Expert(s)
etc
Director of Marketing Operations (or similar name)
This functional group primarily deals with marketing activities that involve marketing automation tools like Marketo, revenue attribution tools like Bizible, data analysis and reporting tools like Tableau and Google Analytics, customer relationship management tools like Salesforce, lead processing and routing, etc.
Marketing Automation, e.g. Marketo Expert(s)
Customer Relationship Management (CRM), e.g. Salesforce, Expert(s)
Reporting and Data Analysis Expert(s)
etc
Director of Channel Marketing (or similar name, e.g. demand generation)
This functional group primarily deals with marketing activities that fall under certain channels, e.g. events such as virtual or in-person conferences, partner marketing, etc., and demand generation activities like advertising campaigns such as Google Pay Per Click (PPC), email marketing, digital media marketing, and print advertising.
Event & Event Marketing Expert(s)
Partner Marketing Experts(s)
Google PPC Expert(s)
Campaigns Expert(s)
etc
Director of Product (or similar name)
This functional group primarily deals with product management and product-specific marketing. For example, many tech companies have multiple products. Each product requires a specific subject matter expert, as it may be uncommon to find someone who is an expert in multiple products. Each product expert (commonly called a "product manager") is responsible for their own product: understanding their product's customers' needs, making product feature decisions, and helping market the product, e.g. by writing product page content and product-related blog posts, giving product-related webinars, etc.
Product Expert for Product A
Product Marketing Expert for Product A
Product Expert for Product B
Product Marketing Expert for Product B
etc
Project Manager
There is one role that doesn't quite fit in any of the categories above. Needless to say, many marketing activities require the collaboration of multiple functional groups, but each of these groups specializes in its own area. There's no such thing as a person or group who specializes in (or is even interested in) everything. One big issue I often see is how certain marketing activities with a specific hard deadline, e.g. due to an earnings release or a predetermined and pre-marketed event, result in chaos, with many people working nights and weekends due to a lack of planning, ownership and project management. That's why there's a dedicated title called Project Manager (PM), along with the Project Management Professional (PMP) certification. These people don't specialize in the various components that go into a project; rather, they keep a project on track to avoid delays, mistakes, oversights, etc. Sometimes, they have the boring task of repeatedly reminding people to do their part so they don't block subsequent tasks. Assigning a random person to be a temporary project manager may work for small projects involving few people, but for large projects involving many people, a "real" project manager who actually specializes in project management is needed.
Title Hierarchy
Here’s a title hierarchy commonly used in many companies.
Personal opinion: the wave pool isn’t impressive. There’s only one wave every few minutes. The lazy river is small, but it has a strong current, which is nice.
Personal Opinion: This is a relatively small waterpark compared to the ones I’ve tried in Orlando, FL. However, it wasn’t crowded, which meant lines were short. There was a decent variety of rides and they were all fun. The wave pool creates large waves every 5 minutes and they last for maybe 5 minutes. Definitely a lot of fun as the strong waves can push you towards the beach. The lazy river is of decent length, however, the current wasn’t very strong.
Take free tram to Luxor and Excalibur or walk through the enclosed walkway
Luxor
Opened in 1993
4400 rooms
Take pictures of Egyptian architectural theme
Titanic Artifact Exhibition
Apr 27 – Sep 4 | 11 AM – 8 PM; last admission 7 PM
Sep 8 – Nov 12 | 11 AM – 6 PM; last admission 5 PM
Nov 13 – Dec 31 | 11 AM – 8 PM; last admission 7 PM
$32

Bodies… The Exhibition
Apr 27 – Sep 4 | 11 AM – 8 PM; last admission 7 PM
Sep 8 – Nov 12 | 11 AM – 6 PM; last admission 5 PM
Nov 13 – Dec 31 | 11 AM – 8 PM; last admission 7 PM
$32
See Cirque du Soleil's Mystere
Friday – Tuesday | 2 shows | 7 p.m. & 9:30 p.m.
Arrive 30 mins early for pre-show entertainment
Actual performance time is approximately 90 mins
$64 – $135