Note: If you find this newsletter truncated by your email client, you can directly visit the published version here.
Hey,
Last Sunday I released a video: Raaodsn X-phi | A Space Journey. Do check it out. I only got a few pieces of feedback (all positive). Some of the people I shared it with left me on a cliffhanger (I waited for their feedback). Anyway, that's not the main topic. Probably, I should stop caring about appreciation at all!
This video is special. Partly because it sat on my TODO list for almost 2 months. And partly because it's one of the few projects I frequently thought about, even while focusing on other tasks; whenever I scanned through my TODOs, it would draw out a guilt of procrastination. So, 10ish days ago I got my shit together. I re-purposed my TODO list and prioritized this particular task.
I started by refactoring the narration. I broke the original writing into chunks for recording the audio, and split/combined the chunks so that no single recording session would be too long or too short. For every chunk, I ended up doing around 5-7 voice recordings. Plus collecting ~8 pieces of footage.
It took a toll on me. I'd sit for hours listening to my own recordings (cringe) and record the same thing over and over. This was particularly challenging because I have a very shitty microphone that picks up noise from the surroundings. Either I had to wait for the noise to settle down (wait for the neighbour's dog to stop barking, wait for the kids outside to stop playing) or do audio post-processing, which I suck at. (I am starting to get the hang of Audacity.)
Regarding footage, I experimented with search keywords to find videos that resonated with the narration. Often, I'd deviate into some other random videos.
One night I ended up watching a 30-minute video: Journey to the End of Time. **Existential crisis stares at me** Haha…
I browsed mindlessly through ESA footage and pexels.com, all without attending to the original task. (See, there's a pattern here...)
I refactored many things and still felt dissatisfied with them. In the end, I gave up on minor changes; good enough was just fine. I did the final rendering, which has its own story too. Still, the video is not that good! Oh, and it wouldn't have been possible if I hadn't switched my video editor from OpenShot to Shotcut.
I am explaining all this in detail because our attention has a miserable property - it is rare! Deep focus is hard to achieve. I also want to convey that even a seemingly simple 2-minute video might take a long time to create. We often underestimate the time for completion (planning fallacy; more on this later).
Enough of the rants! Let me share some of the content I (re)consumed, especially the concept of the Closure Effect.
#Reading
[[Closure Effect]]
(See: Will Schoder’s Video for reference)
How often do you face a situation where you are doing task X but keep thinking about another task Y? Even when you are not doing anything, do you think about them? Perhaps you recently watched a movie with an open ending and keep thinking about possible theories? Perhaps you just saw a short paragraph on the internet about topic Z, and can't help thinking about it?
These incomplete-yet-I-want-to-think-about things are Open Loops, and the desire to close them is termed the Closure Effect (also known as the Zeigarnik effect).
This is especially strong when you are actively doing something and get distracted abruptly.
These open loops can also have good impacts. For instance, while trying to write my newsletters, I often create "loose" incomplete drafts. This allows me to constantly think about different related ideas (mostly undocumented) and later connect (continue) them. Ernest Hemingway used to do a similar thing.
Another way I try to benefit is while playing guitar. I leave my guitar on the bed so that it creates a frequent urge to play. (Also, random noodling without a proper time signature is my bad habit! Joke's on me.) Most melodies I come up with are vague. Either I record them incomplete, or I try to connect them with the old ones. (See this melody. The 2nd string broke in the intro. But I still know what I want to play once I fix that.)
Too many of these open loops are taxing and probably mentally unhealthy. So what to do about it?
One way to mitigate this is to convert some tasks into habits so that they become natural to perform on a regular basis.
Another way is to maintain a TODO list and prioritize the tasks (which I am clearly bad at). This helps in dispersing the cognitive load; we don't have to remember everything.
My Thoughts
I feel guilty of accumulating too many open loops. I stare at my TODO list for too long without actually taking a step to execute anything. Eventually, long-standing incomplete tasks disappear into oblivion (what a way to phase out of existence).
I can only focus on a few tasks at a time. I am not saying I have to choose every one of them from the list. But sometimes they stay on my mind for a long time. As a result, I can't focus on things with higher priority. For example, I often think about problems from work after office hours, which keeps mixing with my personal goals. (Related: how do you stay motivated while working remotely?)
Beyond these personal scopes of the closure effect, I think it exists in society too. For instance, Nepal's Melamchi Water Project has been in an incomplete state for decades now (originally started around 1998). Still, people are curious (and yes, of course, frustrated) about its state, and will remain so. Political elections are another example of this effect. When elections near, the news media nudges us to think about politics.
A more universal example might be the mysteries of the world. For instance, a mathematical problem posed centuries ago (say, Fermat's Last Theorem) lingers from one century to another. A mathematician dedicates decades to solving it… and…
(Figure: Attention-Insecurity-Guilt Chart)
The Lottery of Fascinations
Scott Alexander | slatestarcodex | 8 min
Here, Scott presents his guilt (I am not sure if this word represents the situation fully) of not being "math-savvy".
I don’t know if it’s that I’m bad at math, or that I just don’t enjoy math enough to be intrinsically motivated to pursue it.
The main takeaway is not about the inability to perform well in a particular domain, but about the “fascinations” varying from domain to domain.
For instance, statistics might not seem very fascinating when you're applying simple formulas to Excel sheets. But it might be worth something when working with machine learning models. (My cognitive bias towards ML :/ )
Another aspect of this is that we tend to tag a person with more technical skills as intelligent. However, this narrative fails when two people from different domains each excel in their own field.
While Scott emphasizes that he is not good at math, it's clear that he is good at other things, and he goes on to accept "not being good at math".
My Thoughts
I am often bombarded by a similar feeling of not being good at "something". I feel guilty of having [[Shiny Object Syndrome]] (finding new things fascinating). The feeling might stem from not impacting the direction of the world directly.
I guess acceptance is the key to all of this?
I am not sure. If you had asked me "What are you fascinated about, Nish?" a few years back, I might have replied "Quantum Physics. Programming. Astronomy". Now, I don't seem to have an answer. How the heck should I know? I find everything fascinating! I am guilty of not being able to grasp the passing time.
Asking the right question
Daniel Anderson | 9 min
This read might be a bit technical, so I am leaving out the details. The main point is that reframing a problem in a different way might give us a better understanding of the problem, and possibly of the solution. This applies to any domain of life, not only technical ones.
Here, the author approaches the face recognition problem from various angles to analyze the performance of the machine learning model.
The Planning Fallacy
Anne-Laure Le Cunff | 6 min
The Planning Fallacy is a cognitive bias in which we underestimate the completion time of a task.
While we are able to recognise past predictions where we have been over-optimistic, we often keep on insisting that our current predictions are realistic.
One of the causes of this bias is our emotional attachment to our actions. We focus too much on minute details rather than on the bigger picture. As a result, we prioritize our beliefs more than the actual evidence.
Some ways to mitigate this might be:
Defining our priorities. (One effective way is using Eisenhower Matrix)
Questioning our actions and motivations. (Ask questions before you start outlining your plan.)
Are you planning on finishing a project by a certain date because you want to—because it would be the most convenient scenario—or because you are objectively convinced it can be done by then?
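To make the first mitigation concrete, the Eisenhower Matrix is just a two-question sort: is the task urgent, and is it important? Here's a minimal sketch in Python; the task names and action labels are purely my own illustration, not from the article:

```python
def eisenhower(urgent: bool, important: bool) -> str:
    """Map a task's urgency/importance to an Eisenhower-matrix action."""
    if urgent and important:
        return "do first"
    if important:      # important but not urgent
        return "schedule"
    if urgent:         # urgent but not important
        return "delegate"
    return "eliminate"  # neither urgent nor important

# Hypothetical TODO items, tagged (urgent, important)
tasks = {
    "fix production bug": (True, True),
    "record narration": (False, True),
    "answer routine email": (True, False),
    "mindless footage browsing": (False, False),
}
for name, (urgent, important) in tasks.items():
    print(f"{name}: {eisenhower(urgent, important)}")
```

Forcing each open loop through those two questions is what disperses the cognitive load: the list decides, so you don't have to keep re-deciding.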
My Thoughts: Now, I am guilty of this fallacy too. Another existential dread added to my list. Nevertheless, prioritizing things definitely helps.
HN: How do you overcome decision fatigue in software development
In software development, there are a lot of decisions to be made: programming language, coding style, API frameworks, text editors, IDEs, software version, database… This list goes on. So, how do we cope with these decisions?
I think this is a genuine problem which has no solutions but only trade-offs.
The top discussion of this thread revolves around the idea of switching to Boring Technologies: older tech stacks that are more stable and have long-term support. Unlike these, new technologies require you to constantly stay up-to-date (since they are actively developed).
I think it entirely depends on your use-case too. You can choose to stick with older technologies. But if you need something new and improved, buckle up for the paradox of choice! (Too many choices kill the curiosity :/)
However, this fatigue is more nuanced while writing code. There are a ton of decisions to make:
Naming the variables
Going functional or object-oriented
Which classes to use
Which tasks to modularize
Which algorithms to use
The list goes on. And that's perfectly fine. This fatigue is inevitable in almost every domain, but it's more pronounced in software engineering because small details/components can affect the whole. (Coding is a mess!) If these decisions stress you heavily, it's time to take a break. Rest. Go for a walk.
#Watching
Plagiarism Problem In Data Science
Ken Jee | 8 min
Plagiarism is a rising issue in data science because most problems/solutions have large overlaps (say, the same feature engineering, hyper-parameter tuning…). Without proper attribution, it's hard to know the authenticity of the code. What is weird is that most people don't even realize they are using someone else's code.
So what can be done?
If you are the original author, reach out to the person who is plagiarizing your work.
If you are the person who is using someone else's work, cite/attribute the source!
On a personal note: I encountered plagiarism with our open-source project playx last year. A person had copied the code from the repo and changed only the name (layx). It wouldn't have bothered me if I were the only person contributing to the codebase. But there were others whose hard work counted more than mine. So, I settled it by contacting GitHub. It would have been less severe if the person had simply forked the repo and followed the code's licensing agreement. Anyway, Deepjyoti, one of the major contributors to playx, has a good write-up about this here.
Rocket Factory Tour - ULA
smartereveryday | 55 min
I enjoyed this thoroughly. I loved Destin's curiosity and the humbleness of Tory Bruno (ULA's CEO).
The two important notes from this are:
(I) Even a minute detail (or error) can affect the efficiency of the rocket. When humans are on board, it is magnified into a life-and-death situation. For example, take a seemingly simple task like welding thin sheets of metal to form the rocket's fuel tank.
Normally, welding is done by heating and melting the joints. However, this is risky for a system like a rocket, because the usual method produces joints whose properties (such as heat conductivity) differ significantly from the rest of the structure.
So instead, friction stir welding is used, where the metal plates are slowly rubbed together to merge. This minimizes the change in properties at the joints.
(II) Simple geometric properties have a profound impact on the structure. For instance, the use of triangulation patterns makes the structures rigid.
(Rockets are awesome!)
Transformer Neural Networks Explained
CodeEmporium | 13 min
Disclaimer: Like anything else, I’m not confident enough to go in-depth. I only know certain abstractions of this network.
Transformer Networks have gained popularity in recent years, especially in the field of Natural Language Processing (notably due to the rise of the GPT family). So, before jumping into these networks, it's good to know why their predecessors, Recurrent Neural Networks, fell short.
RNNs are computationally expensive because they maintain a lot of hidden states to compute the context between words. This computation grows massively as the sentence length increases, and even more when layers are added.
RNNs can only see one word at a time, so internal contexts are computed word by word along the sequence. This is reasonable for shorter sentences. But once the length increases, the relationship (and influence) between distant words is hard to capture. This ties directly into the computational complexity issue above.
Transformer networks (described here in a very superficial way) mitigate both problems by seeing the whole sentence in a single go.
The sentence is fed in as a whole instead of word by word. This makes training computationally cheaper, since the computation can be parallelized.
Special Attention Heads are used to prioritize the words (termed "attending") that influence the output. This way, the network is able to capture connections even between the farthest words.
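The core of those attention heads is scaled dot-product attention, which can be sketched in a few lines of NumPy. This is a rough single-head illustration (none of the multi-head or positional-encoding machinery from the video), and the shapes and names are mine:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Every query compares against every key in one matrix multiplication,
    so even the farthest words influence each other directly.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_len, seq_len) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                            # weighted mix of the value vectors

# Toy "sentence": 4 word vectors of dimension 8, attending to themselves
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)       # self-attention
print(out.shape)  # (4, 8): one contextualized vector per word
```

The point of the sketch: nothing in it is sequential, which is why the whole sentence can be processed in one go, unlike an RNN stepping word by word.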
On this note, I recommend reading Jay Alammar's "The Illustrated Transformer". It's a gem.
Art in the age of machine intelligence
Refik Anadol | 12 min | TED Talk
Damn. This is mind-blowing because real-life data is converted into surreal art. The last part of visualizing every TED Talk is epic.
#Fragments
Story: The Perfect Companion
A nicely written story that portrays an artificially intelligent bot as a companion...
Music: Pale Blue Eyes
Listening to this in the afternoon, under the sky… :)
Music: Femme Fatale
Wow. So beautiful! On repeat every night!
Loved the part around 1:02 by Pauline! Do check out other videos! They are lovely. The original is equally enchanting.
Music: For that second - Rob Scalon
This piece is serene! Rob never ceases to inspire me. I should probably improve my tapping too; I've failed at it in the past.
#Ending-Thoughts
I've been procrastinating on a lot of things. Half of these notes are from the archives. I've been guilty of not doing anything despite knowing that I want to do something. If you have something interesting, please do share. I am locked up in my own mind-cave these days, unable to see the bigger picture, lacking motivation. This newsletter is the only thing keeping my motivation up, I guess…
(…although I haven't seen any growth in the newsletter, if subscriber counts/views/clicks matter… Twitter would have helped in widening my knowledge base. I guess learning in public is the best way to grow symbiotically…)
I hope you enjoyed reading it!
Anyway, this Tweet from Tim Urban resonated with me. I am the guy at the bottom, just sitting in the insecure canyon. :)
Take Care,
Nish