
Category: Digital Literacy


Final Summative Product

Terminal Tools That Changed My Workflow

During this term, I’ve been learning tools that improve how I work in the terminal. This post is a summary of what I learned from using Tmux, customizing Vim for writing, and going deeper into Vim/BASH features. Learning these tools reduced cognitive overhead and made my workflow faster and more efficient.

Tmux – Why I Use It Now

Before using Tmux, managing multiple projects and terminals was frustrating. I would have to open multiple windows, manually navigate to directories using cd, and remember exactly what I was working on, which is a big deal when working on multiple projects at the same time.

Tmux solves that.

With Tmux I can:

  • Create a session for each project.
  • Save the exact state of that session.
  • Resume exactly where I left off, even after a reboot (via the tmux-resurrect plugin).

This means:

  • I don’t have to remember what I was doing or in what directory.
  • I don’t have to navigate to directories every time.
  • I can close everything without worrying about losing progress.

Tmux became part of my daily workflow after only a few days of use, because of how much cognitive overhead it reduces.
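
For anyone curious, the day-to-day commands are minimal. Here is a sketch (the session name is made up; detaching uses the tmux prefix, C-b by default):

tmux new -s project-a      # create a named session for a project
# work for a while, then detach with prefix + d
tmux attach -t project-a   # later: resume exactly where you left off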

Making Vim Work for Writing

I originally learned Vim for editing code, but this term required a lot of writing. I didn’t want to use a separate word processor, for several reasons.

So I turned Vim into one.

Vim is a keyboard-based editor with different modes:

  • Normal: Navigate and run commands
  • Insert: Type text
  • Visual: Select text

Using Vim for everything reduces mental overhead. I don’t have to learn multiple apps with different shortcuts.

I wrote a custom command WordMode that makes Vim better for writing:

  • Soft-wraps long lines at word boundaries
  • Enables spell check
  • Smart indenting
  • Light background
  • Remaps movement keys to follow display lines instead of logical lines
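
Under the hood these are mostly a few buffer-local options, excerpted here from the full function (listed in my Vim as a Word Processor post below):

vim.opt_local.wrap = true       -- soft-wrap long lines
vim.opt_local.linebreak = true  -- break at word boundaries, not mid-word
vim.opt_local.spell = true      -- enable spell checking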

Now I can comfortably use one editor for everything—coding, writing, presentations.

Vim and BASH – Advanced Features

I also spent time learning features in Vim and BASH that I hadn’t used much before.

Vim Capture Groups

Vim allows capturing parts of text using regex. This is useful for:

  • Editing repeated patterns
  • Swapping or reordering parts of a line
  • Making batch edits

These features are powerful and eliminate a lot of tedious manual editing.
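
A minimal example (the exact pattern depends on your text): this substitution swaps two comma-separated fields on every line, turning “Doe, Jane” into “Jane Doe”:

:%s/\(\w\+\), \(\w\+\)/\2 \1/

Here \( \) captures a group, and \1 and \2 refer back to the captured text in the replacement.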

BASH: xargs, ls, tr

xargs is a very useful command when you need to pass multiple piped arguments into another program. To illustrate its use, consider a practical example: I often search through PDFs when studying. I used to combine files like this:

pdfunite *.pdf all.pdf

But the shell expands *.pdf alphabetically, so this merges the files in the wrong order. A better way is:

ls -tr *.pdf | tr '\n' '\0' | xargs -0 sh -c 'pdfunite "$@" all.pdf' sh

This:

  • Sorts the PDFs by modification time (oldest first)
  • Handles filenames with spaces, by making the list NUL-separated
  • Keeps all.pdf as the final argument via the small sh -c wrapper, since pdfunite expects the output file last
  • Builds a clean pipeline using UNIX principles

“Write programs to do one thing well. Write programs to work together.”

Final Thoughts

The motivation for my inquiry was to fill the knowledge gaps in my terminal-focused workflow. I wasn’t using some basic but essential commands like xargs or syntax like $(command). I also wasn’t using many powerful features in Vim like macros, capture groups, or navigation with t and f. I didn’t have a good solution for terminal multiplexing either; adopting Tmux fixed that and gave me a much more organized and efficient workspace. This project gave me the opportunity to dive deeper into the tools I use on a daily basis, and to think about what else could be improved and expanded on.

Plans to further develop my setup:

  • Git integration using the vim-fugitive plugin
  • Deep dive into Vim’s quickfix lists
  • Create custom remaps for Tmux

These changes are helping me build a focused, minimal setup.

Week 8: Curation & Annotation

Exploring Zotero & My Opinion on Annotations

This week we learned about Zotero for curation and citation, and also about the function and importance of annotation.

Curation

I would say that I am a minimalist when it comes to certain technologies, but I am quick to adopt technology that saves me from repetitive tasks. One of the most dreadful things I have experienced in school is finding and formatting references for a paper by hand in MLA, APA, etc. Traditionally, if I needed to create references for a paper I would input them into LaTeX first, which at least saved me from formatting the lines by hand, but I still had to manually enter all of the information, which is time consuming and tedious.

Zotero is a wonderful app that fetches all the metadata about an article (author, DOI, title, etc.) and formats it into APA, MLA, and other styles. I have been using it this semester for some of my classes, and I can say it’s an absolute game changer for finding references and saves a lot of time. It also accounts for many workflows and technologies that people use, like Microsoft Word, LaTeX (BibTeX), and HTML for websites. Needless to say, I have adopted this app into my academic app stack and plan to keep using it.
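
For the LaTeX workflow, Zotero exports BibTeX entries that look roughly like this (a made-up entry, purely for illustration):

@article{doe2024example,
  author  = {Doe, Jane},
  title   = {An Example Article},
  journal = {Journal of Examples},
  year    = {2024},
  doi     = {10.1234/example}
}

A \cite{doe2024example} in the paper then handles the formatting automatically.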


Annotation

Personally, I have never really found annotation that useful, and I only annotate things that are temporary. The reason I typically don’t like annotating is that it clutters the pages of books and distracts from the content being read. I find annotations in books too permanent; the nature of exploring ideas is that they change over time as they are iterated on and refined. I have always found that even highlighting and writing in the margins cages my thinking rather than letting me explore and discard ideas along the way in an iterative process. Instead of annotating, I would rather just take notes while reading. Where I have found annotations useful is on things that are only relevant for a short period of time, like schedules, course outlines, and rough drafts of papers.

Exploring Vim & BASH advanced features


In this blog post I illustrate how I have been pushing my knowledge in Vim and BASH by using some of their most powerful features.

tl;dr:

  • Groups can be captured in Vim using the syntax \(<example group>\), which is useful when modifying, adding, or changing text around a pattern that is meant to stay intact (the group).
  • xargs is an essential and probably one of the most important BASH commands. It allows commands to be piped and combined into sophisticated, powerful pipelines.

Vim Capture Groups

One of the most useful features in Vim is capture groups. I will be honest: I have known about capture groups for a while, but I never pushed myself to learn them, which is why for this week’s inquiry I wanted to get a grasp of this feature. To illustrate what capture groups are, I will explain different use cases for them and give a visual example of them at work.

Common use cases

  • Adding text around a group to change how it functions within a program.
  • Changing the structure of multiple lines.
  • Swapping, or re-ordering, multiple groups.
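
As a quick sketch of the second use case (the pattern is illustrative), this substitution wraps every line in quotes and appends a comma, turning a plain list into CSV-style entries:

:%s/^\(.*\)$/"\1",/

The ^\(.*\)$ captures the whole line, and \1 re-inserts it between the added characters.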

This video illustrates some of the use cases that I have already encountered in my day-to-day work. (Ignore the last minute of the video; there was a glitch when combining the two videos.)

BASH – ls, and xargs commands

The Issue

This term I have been using a lot of BASH commands when accessing and organizing information for my various classes. To illustrate, consider an example: it is common for me, and I assume other students, to be searching for information contained in a specific PDF while being unsure which PDF contains it. There are probably many different approaches to this problem, but the way I usually solve it is to combine all of the PDF files into one and then search for keywords.

A Rudimentary Solution

The way I would traditionally accomplish this is with a command like pdfunite *.pdf all.pdf. However, the shell expands *.pdf alphabetically, so the files end up out of order. This somewhat solves the problem, but it could be better and more robust.

A Good Solution

It would be nice to sort the PDFs by an identifier or by metadata, like the time/date the files were downloaded. How to accomplish this is not immediately obvious; however, I came across the xargs command (“build and execute command lines from standard input”), which allows this kind of sorting to be done before passing the file names to a program. In the improved command we first sort all of the PDFs in the folder from oldest to newest (-t means sort by time, -r reverses the ordering), which matches the order the PDFs were released by the prof. We then replace each \n with \0 in case a file name contains spaces, and hand the NUL-separated list to xargs. One subtlety: pdfunite expects the output file as its last argument, and xargs appends the input file names to the end of the command, so a small sh -c wrapper is needed to keep all.pdf in the final position.

ls -tr *.pdf | tr '\n' '\0' | xargs -0 sh -c 'pdfunite "$@" all.pdf' sh

Another use I have found is playing an album whose tracks arrived as separate, unsorted files. We can apply a similar command to ensure the album plays in the correct order. Adding the -c flag makes -t sort by each file’s status-change time rather than its modification time, which for freshly downloaded files is effectively the time they were downloaded.

ls -trc *.mp4 | tr '\n' '\0' | xargs -0 mpv

If you have any bash commands that you find useful I would love to know, thanks for stopping by!

“Write programs that do one thing and do it well.”
“Write programs to work together.”
“Write programs to handle text streams, because that is a universal interface.”

https://en.wikipedia.org/wiki/Unix_philosophy

The Cost of Freemium Software

Bonnie Stewart brought up some very important points to consider this week. In this article I will discuss some of the topics she raised and share some other considerations.

Value Exchange

My take is that data collection as a whole is inhumane, malicious, and parasitic. The logical conclusion of this practice is a form of slavery, because we relinquish our power to monopolistic companies, and that power is then used against us in the form of advertisements, surveillance technology, and the sale of that data to other companies. It’s true that these products offer a lot of value at no monetary cost, but the value exchange is not equal. I think more people are becoming aware of this exploitative relationship and are starting to use privacy-respecting software, but most people are still not aware of the dangers. There are also people who are aware of the consequences but consciously choose digital convenience over sovereignty, and the allure of the former is strong. A few common justifications I have heard, and have also used myself, are: “I’m not doing anything wrong online, so I don’t care if my data is collected”; “I’m too invested in the ecosystem of this technology to switch”; and “How is my data going to make any difference when there is already so much data out there?”

“Anonymous” data

One of the most dishonest turns of phrase is “anonymizing data”: there is no such thing as anonymous data, because data can be de-anonymized. The phrase is used purely to mislead people into thinking their data is somehow safe because it is anonymized. And even if the data were truly anonymous, it would still be used in nefarious ways.


Alternatives

The exploitative nature of these companies has created an entire market for privacy-respecting alternatives, some free and some paid. In a lot of cases these alternatives are better than the big tech products: they are faster, less buggy, and give a sense of freedom that one’s data is not being harvested. I am pleased to see that this class uses open-source alternatives; it is important to show that the alternatives are actually good and viable.

Conclusion

It is now clearer than ever that data collection is exploitative, and the outcomes of this practice are harmful. I think it is important for people to be aware of the dangers, make a deliberate decision about what software they want to use, and consider the privacy-respecting alternatives. There should be more diversity in the technology ecosystem to avoid and break up the monopolies that currently exist. I also think that governments should intervene and abolish the practice of exploitative data collection.

Online Disinformation – SIFT method

It is important to practice a level of skepticism and critical thinking when consuming information online. In this week’s material, Mike Caulfield introduces the SIFT method, a framework people can use to judge whether a source is likely good or bad. In this post I will talk about the novice-expert problem, how our perception of content can skew how we think about it, and how information online is much more personalized than plain text.

How I understand the novice-expert problem: a novice in a subject will likely accept plausible-sounding explanations even when they are incorrect (using heuristics, or thinking “ehh, yeah, that’s probably true”), while an expert will be quick to identify issues with an argument presented to them. Being aware of these inherent limitations helps illustrate that we should exercise critical thinking more than we may think we need to.

A number of other signals can also skew our perception of online media. Consider the concept of “preselection”, which manifests in metrics like views, likes, and comments. If a piece of media has high metrics, it has been “preselected”: other people have given their approval, lending it online authority. This phenomenon can lead the viewer to believe that the information is trustworthy.

It has always been important to exercise critical thinking when presented with ideas or information; this was true before the internet existed and is still true today. One thing that makes the internet different from plain text is the personal quality of online material. Content on the internet is not just text but also video, audio, and images. I would argue it was easier to be objective about information when it was represented in plain, cold text; these other formats make it harder to be objective because of the inherent personalization of the information being presented.

In a sea of information, influencers, algorithms, and online opinions, it can be hard to separate the signal from the noise. Fortunately, tools like the SIFT method can give us more context about a source and help us make better decisions about information online.

Vim as a Word Processor

I started learning Vim/NeoVim a few years ago in order to become faster at text editing, mainly for coding. However, this term I am in a lot of classes that require writing, so for this blog I wanted to set up my Neovim to behave like a word processor.

Vim as a Word Processor demo

Background of Vim

Vim is a simple, terminal-based text editor centered around keyboard navigation rather than mouse navigation. When using Vim, the mouse is not used whatsoever. This is achieved through Vim modes, the most used being:

  • Normal mode: Entered by pressing <Escape>. In normal mode the user navigates the page and enters commands.
  • Insert mode: Entered by pressing <i>. In insert mode the user can type text.
  • Visual mode: Entered by pressing <v>. Visual mode allows the user to select text.

Vim uses grammar-oriented shortcuts. For example, <2ft> translates to “second, forward, letter t”, i.e. “move the cursor to the second t on the line”.
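
A few more concrete examples of this grammar (normal-mode keystrokes):

  • 2ft – jump forward to the second 't' on the line
  • d2w – delete the next two words
  • ci( – change the text inside the surrounding parentheses

Each command composes a verb (d, c), an optional count, and a motion or text object.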

Why would I want to use Vim as a word processor rather than just using a word processor?

Mainly because editing text with a mouse is cognitively taxing and not fun. Vim allows me to edit any text in an efficient, fast, and fun way, regardless of the file type. I can use it for coding, writing English, and creating presentations. This unified approach reduces cognitive overhead immensely: I do not have to learn new applications with different shortcuts; I can learn one set and get really good at it, and I never have to reach for the mouse while working in Vim!

How I made Vim into a word processor

Since Vim/NeoVim is open source, it is very easy to hack and customize. That made it straightforward to create a command called WordMode that turns the editor from a code editor into a word processor. I will not explain the function line by line, but if anyone is interested in adding this to their NeoVim setup, the code is included at the end.

Conclusion

Creating a word-processor mode in Vim has been a game changer for me, because I used to find myself getting stuck while writing and not knowing what to say. Vim allows me to get my ideas out fast, which lets me flow smoothly while writing.

Sources

Download:

https://github.com/neovim/neovim
https://www.lazyvim.org/

Other

https://en.m.wikipedia.org/wiki/File:Neovim-logo.svg

-- Define the "Word Processing Mode"
local function word_processor_mode()
  -- Format options: 't' auto-wraps text at textwidth, '1' avoids breaking after one-letter words
  vim.opt_local.formatoptions = 't1'
  -- Set text width to 80 for line wrapping
  --vim.opt_local.textwidth = 80
  -- Remap j/k (and 0/$) to move by display lines when text is soft-wrapped
  vim.api.nvim_buf_set_keymap(0, 'n', 'j', 'gj', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'n', 'k', 'gk', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'n', '0', 'g0', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'n', '$', 'g$', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'v', 'j', 'gj', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'v', 'k', 'gk', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'v', '0', 'g0', { noremap = true, silent = true })
  vim.api.nvim_buf_set_keymap(0, 'v', '$', 'g$', { noremap = true, silent = true })
  -- Clear the color column for a cleaner writing view
  vim.opt.colorcolumn = ''
  -- Enable smart indenting
  vim.opt_local.smartindent = true
  -- Enable spell checking
  vim.opt_local.spell = true
  -- Soft-wrap long lines at word boundaries
  vim.opt_local.wrap = true
  vim.opt_local.linebreak = true
  -- Use the English (US) dictionary for spell checking
  vim.opt_local.spelllang = 'en_us'
  -- Disable tab expansion to spaces
  vim.opt_local.expandtab = false
  -- Set background to light mode
  vim.opt.background = 'light'
  vim.opt.scrolloff = 1
  -- require('zen-mode').toggle()
end

-- Create a command to trigger "Word Processing Mode"
vim.api.nvim_create_user_command('WordMode', word_processor_mode, {})
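
With this in place, switching any buffer into writing mode is just a matter of running :WordMode.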

The Good and Bad of AI

In this post I would like to delve into my experience with using AI, and how this new technology can be leveraged or misused.


The use of AI is becoming ubiquitous among students, professionals, and everyday people. The problem is that this technology is very easily misused; understanding the strengths and weaknesses of AI can help people leverage it to reinforce their work and learning without interfering with either.

AI as a tutor

As a student it can be easy to become bogged down when stuck on a problem. Traditionally this would be overcome by asking a professor or hiring a tutor in order to quickly learn a concept. One issue with this approach is that professors do not have enough time to tutor each student, and hiring a tutor can be expensive. An effective use of AI is to treat it almost like a tutor; doing so can help get past the feeling of being stuck, which makes learning easier and faster. Even if the AI spits out an incorrect answer, putting the question into writing can sometimes be enough to get unstuck.

Drilling and example problems – active recall

Another good use of AI is drilling for tests: using it to create sample questions helps with learning and memorizing concepts and the methods used to solve problems.

Search engines are basically useless

I have talked to many people who have noticed a pattern with modern web search… that it sucks. I personally agree that modern search engines are at best kind of useful and at worst useless. What I mean is that it has become nearly impossible to find real information by real people on the internet anymore. The top web results are “Top 10 blah blah blah of 2025” lists, Reddit posts that never answer the question asked, or obscure technical questions from a decade ago that never reached an answer. A lot of the top search results are copied-and-pasted slop with no real information, just fluff. The goal of a website is no longer to document and distribute information that a person finds interesting or has expertise in; the goal now is to maximize time spent on the site in order to increase ad revenue, which most of the time means wasting the user’s time.

GenAI as a search engine

To avoid wasting time searching the internet for something specific, GenAI can aggregate information from across the web that is relevant to the user’s query. This may be a more efficient way to gather information from websites, receiving only what is relevant rather than fluff.

The old fashioned way is the best way to access real information

Probably the best alternative to both searching the web and using AI is to just read the textbook. This is the slower but more reliable route: reading the textbook is less frustrating, and in the long term it is likely a better resource than both the internet and AI.

Offloading too much to AI

One danger of using AI is offloading too much thinking to it. Doing so robs the user of any learning, and it can also alter the way we think critically about a problem by switching that part of the brain off. For example, consider the Einstellung effect: once a person has seen one solution to a problem, it becomes much harder for them to come up with their own. This effect is quite common when AI is misused.

AI causes more confusion and distracts from the real solution

One issue with AI is that it is marketed as an all-purpose tool capable of helping with any subject. This is not true, and AI will confidently give false information, which can lead to confusion and frustration. The best thing to do is to lower one’s expectations of AI and use it sparingly.

Conclusion

GenAI can either be leveraged to aid learning or allowed to impede it. Learning how AI works and where its limits are gives users a better perspective on what AI is good for and what it is not. The current state of AI is nowhere near equal to the capability of the human brain; for the best outcomes, people should not lean on artificial intelligence but use it as a tool, and embrace our own human intelligence.


CLI Tools: TMUX – Terminal Multiplexer

As a programmer, you are expected to spend a significant amount of time using UNIX shells. However, out of the box the vanilla UNIX experience can be clunky and inefficient. There are many terminal-based tools (CLI tools) that address this issue, one being Tmux. I have been using Tmux for only a few weeks, but it has been so useful that I have already integrated it into my workflow. In this post I will go over what I personally have found useful about Tmux, and why it’s worth considering if you use the terminal frequently.

“(tmux — terminal multiplexer) tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached”. – TMUX man(1) page

The main benefit of Tmux is that it minimizes cognitive overhead by containerizing terminal sessions. To illustrate this, consider an example: you’re working on 3 programming projects, and for each of those projects you’re actively working on 3 files, plus a shell for testing/running code. That’s already potentially 12 windows to manage on a daily basis without Tmux. Not only would you have to manage these windows on screen, you would also have to navigate back to each location every day, which is tedious and a waste of time.

Here is the Tmux indicator, showing that I am connected to the session “mysession” and there are three windows numbered 1, 2, 3; “1:nvim” indicates that window 1 has nvim open.

Now consider the same example, but using Tmux. You can create a Tmux session for each of the projects you’re working on. After creating a session for each project, the current state of each session is saved (and with a plugin like tmux-resurrect, it even survives a reboot). This saves time and frustration: you don’t have to remember what you were working on over a long period (potentially months on a software project), you don’t have to repetitively navigate back to it, and it acts as a safeguard against unexpected reboots like a power outage.

Consider the following two videos. The first shows a workflow without Tmux; the second shows one using Tmux.

This is the manual way of “cd’ing” into a directory, without Tmux. Very repetitive, and it wastes time.
With Tmux it is as simple as running tmux attach -> C-a + o -> select session -> back to what I was working on.

The way I have been using Tmux is to have a session for each of my classes. This allows me to work on multiple written assignments at once without having to remember exactly what I was working on; I can just attach to the session and continue working. This has been a game changer for me, because I used to waste a lot of time just using cd and lf to navigate files.
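
Concretely, the per-class workflow looks something like this (a sketch; session names are made up, and C-a is my remapped prefix, the default being C-b):

tmux new -s math201       # one session per class
# open notes, assignments, a shell... then detach with prefix + d
tmux ls                   # list the sessions that are running
tmux attach -t math201    # everything is exactly as I left it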

The following posts will explore more CLI tools, and a deep dive into NeoVim. Thanks for stopping by!

Resources:

https://github.com/omerxx/dotfiles/tree/master/tmux


Open-Source as the standard

One problem that arose when the internet became mainstream was that sharing information was difficult because of copyright laws. One of the major goals of the Creative Commons licenses was to enable people to share their content online by releasing it under an open license. This revolutionised how people used the internet: now people could freely access and distribute information at nearly no cost, without having to worry about copyright laws. There has been movement towards open source, both in software and information; however, corporations and tech giants are reluctant to contribute to open source, especially if it would compromise profits. I argue that open source should be the standard not only for institutions like those funded by tax money, but also for massive corporations and monopolistic tech giants.

It is ironic that there is nothing open about “OpenAI”: their AI models are closed-source, and they are moving from non-profit to for-profit. Contrast this with communist China’s release of the most innovative and advanced free and open-source AI model to date, DeepSeek R1. Cable Green states that “In order to solve big problems information must be open”, and I agree. The point is evident when you consider that open access to Covid-19 studies and data led to rapid vaccine development, open curricula save schools and students tens of millions per year, open-source software like Linux is the backbone of the entire internet, and open-source AI models are leading in innovation. DeepSeek R1 became the most downloaded app on the app stores, with ChatGPT at number two. It affected American markets (Nvidia lost roughly a trillion dollars in market value), forced OpenAI to release a better free model, and demonstrated that the future of AI is open.

The open-source movement is necessary for making rapid progress in science, so the standard should be open. Some of the biggest players are holding society back from making open source the standard; that said, there has been massive innovation in the open-source movement, yet there is still a lot of work to be done.

https://unsplash.com/photos/macbook-pro-on-top-of-table-vSchPA-YA_A

Inquiry Based Learning and Direct Instruction

Teaching, and subsequently learning, is not a perfect science; there are flaws in any system of teaching. In this post I will describe potential issues with the direct instruction style of teaching and why I think it is the most popular mode of teaching. Then I will describe an alternative teaching method based around inquiry, along with its strengths and weaknesses.

This week we learned about inquiry. One thing that stood out for me in Week 3: Inquiry Process, & SIFT Methodology was the contrast between teaching methodologies. The direct instruction system is set up so that students are expected to do well on exams and assignments, but the means of achieving those ends are not accounted for in grading. Essentially, the teacher gives the student material and it is in the student’s hands to do whatever it takes to do well on the test/assignment, with no weight given to the process of learning. The curriculum is mapped down onto the student, which can lead to a mismatch in interest (the student may not be interested in the subject) and difficulty (the content may be too easy or too hard for the student). In some cases this structure causes students to learn only the bare minimum, and to memorise rather than deeply understand topics and build mastery. I argue that the main reason this structure is most commonly used in schools is that it is resource-efficient: the ratio of student learning to resources consumed is high, so an adequate amount of learning can be achieved without the school spending many resources.

The other teaching method is called inquiry-based learning (IBL), and it is based on how people naturally learn. For example, children try to atomise (take something complex and reduce it to its simplest parts) anything they are curious about: they ask a parent a string of questions about something, or experiment with a thing to better understand it, then fold that knowledge into their understanding and build upon it. This is essentially learning through inquiry. This form of learning gives the student the freedom to map their learning up to the curriculum, which leads to higher-quality learning, mastery, and retention of both information and skills. I would argue that inquiry-based learning is objectively better than direct instruction because it promotes deep learning, competence, and process over strictly getting good grades. The potential downside is expense: the ratio of teachers to students, equipment costs, and work space could all add to the cost of IBL.

Given the pros and cons of each teaching style, I propose that schools implement aspects of IBL into their courses, or offer some fully IBL courses. Doing so would make students more valuable to the workforce by enabling them to obtain mastery, competence, and experience in a specialised field of their interest. This would make it much easier for students to find a high-paying job soon after graduating high school.

