Diffstat (limited to 'content/articles')
| -rw-r--r-- | content/articles/_index.md | 7 | ||||
| -rw-r--r-- | content/articles/critical-cs.md | 55 | ||||
| -rw-r--r-- | content/articles/denounce-ai.md | 42 | ||||
| -rw-r--r-- | content/articles/dijkstra-knuth.md | 36 | ||||
| -rw-r--r-- | content/articles/good-cs-books.md | 20 | ||||
| -rw-r--r-- | content/articles/i-am-an-ai-denier.md | 45 | ||||
| -rw-r--r-- | content/articles/lecture-photos.md | 29 | ||||
| -rw-r--r-- | content/articles/mythical-man-month.md | 39 | ||||
| -rw-r--r-- | content/articles/nature-of-technology.md | 98 | ||||
| -rw-r--r-- | content/articles/poster-fair.md | 16 | ||||
| -rw-r--r-- | content/articles/useful-links.md | 32 |
11 files changed, 419 insertions, 0 deletions
diff --git a/content/articles/_index.md b/content/articles/_index.md new file mode 100644 index 0000000..100edaa --- /dev/null +++ b/content/articles/_index.md @@ -0,0 +1,7 @@
+---
+title: "Articles"
+date: 2026-01-31T06:37:16+01:00
+draft: false
+---
+
+This is an up-to-date list of all articles. diff --git a/content/articles/critical-cs.md b/content/articles/critical-cs.md new file mode 100644 index 0000000..b402b0a --- /dev/null +++ b/content/articles/critical-cs.md @@ -0,0 +1,55 @@
+---
+title: "Comment on The New History of Modern Computing by P. Ceruzzi and T. Haigh"
+date: 2026-01-31T06:37:16+01:00
+draft: false
+summary: A short digression on mindlessly following trends.
+---
+
+Paul Ceruzzi and Thomas Haigh have done us the courtesy of telling the history of computing in their brand-new 2021 book, albeit not without a twist.
+Dedicating a whopping 545 pages of text solely to computers, you would expect the authors to like the topic, but make no mistake -- neither is a technology sympathizer.
+What might come off as even more unexpected is that I agree with the points they make in their arguably most negative chapter, the 15th.
+
+It begins with the example of Elizabeth Holmes and the well-known Theranos fraud.
+Having fairly recently been sentenced to over 11 years of prison for her crimes, such an example is no way to end a book on an optimistic note.
+But it succinctly highlights a recurring problem: _[...] rapid improvements in computer technology had not led to proportionally great social advances or economic developments._
+Ceruzzi and Haigh are entirely right.
+On top of this, they provide several more examples, including, quote: _Between 2008 and 2018, productivity growth in the US averaged just 1.3 percent a year, well below the 2.7 percent achieved from 2000 to 2007._
+While I disagree with the authors on the impact of the computer on society, their comments on the negative consequences of the computer on our daily lives are right on point.
+_The typical American worker of 2018 was no better paid, after adjusting for inflation, than the typical American worker of the 70s._
+It's pretty hard to argue with such a statement.
+In short, both authors conclude that technology does not benefit the ordinary man, but only the so-called "top 0.1\%", whose income after tax quintupled from 1980 to 2018.
+Needless to say, the rest of the chapter is narrated in a similar tone.
+But to reflect more deeply on its relevance to my own experiences, I'll try to connect it to a recent workshop I participated in at my university.
+
+Let me begin by emphasizing that the workshop was very well prepared.
+The roughly 4 hours I spent working on the topics the presenters deemed important were time well spent.
+During the 2 presentations, both speakers knew what they were talking about, had a clear agenda for the meeting and engaged with the audience.
+As such, I cannot say a negative word about the organizational aspect -- but from my personal standpoint, it is a completely different story.
+
+The workshop centered around UGC (user-generated content).
+This is a fancy term for creating short (30 seconds to 1 minute) videos with up-beat music and AI-generated captions, meant to attract the short attention span of the average viewer (adequately deemed "the goldfish") and promote a certain product.
+The most common embodiment of this concept is the TikTok.
+
+Firstly, my attention was immediately drawn to my classmates' behaviour when the presenter played 3 popular UGC TikToks on the classroom projector.
+With roughly 40 people in the classroom, including the staff members, the very moment the first video was played, every single person, without exception, turned their attention towards the projector.
+It was like a scene from a sci-fi horror movie -- one moment everyone was talking over each other, and the next, all 40 heads in the classroom went quiet and synchronously turned towards the single big screen.
+Nobody made a single sound for the 3 minutes it took to play those videos, which to me seemed like a terrifying eternity.
+This event reminded me of George Orwell's 1984 and the book's recurring theme of screens used for propaganda and for the surveillance of its population.
+I think the parallel is hard to miss.
+
+Afterwards, our team was given the task of creating such a video promoting a randomly selected chocolate protein bar.
+Long story short, we were given roughly 45 minutes to do so, including creating a storyboard, recording ourselves with the given product and editing the final result.
+The outcome was a 1-minute TikTok promoting a protein bar that not a single person on our team knew existed an hour earlier.
+
+This is precisely where it crossed the line for me, and I refused to engage actively in the workshop any further.
+The techniques we were advised to use to improve our video exploited the low attention span of the viewers.
+A "hook" -- that's how the first 5-second, attention-grabbing stop-motion of the TikTok was described to us.
+As such, my team successfully hooked the audience (our classmates) with a video advertising a product they knew practically nothing about.
+To top it off, I took the time to read the bar's list of ingredients afterwards, which, for example, included palm oil -- responsible for hundreds of thousands of acres of deforestation in the Amazon.
+With no concern or afterthought about the potential impact of our advertisement, my teammates submitted the video to a shared, available-for-all Google Drive.
+
+It seems to me society just unconditionally accepts the electronic technology it is handed.
+It has become widely accepted to trust technologies simply because they are hyped or popular.
+Why have things turned out this way? Does it have to remain like this?
+I really do not know.
+
diff --git a/content/articles/denounce-ai.md b/content/articles/denounce-ai.md new file mode 100644 index 0000000..2df9023 --- /dev/null +++ b/content/articles/denounce-ai.md @@ -0,0 +1,42 @@
++++
+date = '2025-09-12T23:08:15+02:00'
+draft = false
+title = 'Why using AI to solve your homework is a bad idea'
+summary = 'Using AI to solve your homework hurts both you and your colleagues. Here is why.'
++++
+
+Recently I read a blog post by [Jamie Zawinski](https://www.jwz.org/) about [Anthony Moser's opinion](https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html) on the current developments in AI.
+Now I want to try to formulate my own arguments against the overwhelming reliance on AI nowadays.
+This has been my point of view for a while; however, I would now like to state clearly why I think the direction the technology world is heading in is wrong.
+
+AI, although currently hyped beyond reason, has been around since the previous century.
+However, with the release of ChatGPT to the public, generative models have entered everyone's lives.
+As a Computer Science student I have witnessed first-hand the effects of a paradigm shift in many domains, and after 2 years I believe that relying on content generated by artificial intelligence is simply harmful.
+
+As an avid fan of English literature I really like reading well-written books.
+It is a great feeling to be able to appreciate the intricacies of the language and the craftsmanship of the author, who has taken the time (sometimes decades) to write about a certain topic.
+If you read a lot, you can often tell a well-written book from a poorly constructed one, and if enough people realize this, society rewards great writers with prizes and honors.
+However, with the rise of large language models, essays, books, novels and much more can be created with a single prompt to the model.
+While the quality of such writing can often be questionable, it's important to realize that this takes away the very essence and purpose of writing in the first place.
+When you put pen to paper, you both try to advance your own thinking and convey your feelings and views to a broader audience.
+It is your opinion and findings that matter, and this is by no means a trivial process.
+Using artificial intelligence to write for you, or help you write, or correct your writing defeats the purpose of writing something in the first place.
+This is also the right moment to point out the concerns this raises for book authors and artists of any other kind as well.
+AI is slowly getting better and better at this kind of work, rendering it virtually impossible for me right now to distinguish, e.g., electronic music generated by AI from music created by humans.
+This poses a threat to the literary and artistic community, and by proxy, to readers and everyone interested in art.
+I cannot in good conscience use such technology knowing that it displaces the very people whose work I admire.
+
+What is even more interesting is that many large language models are trained on books, which are completely discarded in the process.
+Anthropic, the company behind the Claude AI model, has destroyed millions of print books to train their AI.
+[Here](https://arstechnica.com/ai/2025/06/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models/) is a very good article about this.
+In essence, to train the AI, one must scan the books first, preferably quickly.
+According to Anthropic, the most efficient way to go about this is to strip the books of their covers, rip out the pages and scan just the printed paper.
+This irreversibly destroys the books, which are later thrown out.
+It's a good moment to ask oneself -- is this what I'd like to happen to my book, if I ever wrote one?
+I will not raise the ethical concerns behind such actions; it is also not my aim to start a debate about this.
+However, I think the question above is worth asking yourself.
+
+I think the point made by [Hayao Miyazaki](https://en.wikipedia.org/wiki/Hayao_Miyazaki), the Studio Ghibli co-founder behind some of the best animated movies of the last century, summarizes it pretty well.
+Recently a video has gone viral of him saying in 2016 that he believes AI to be "an insult to life itself".
+As strong an opinion as it is, I sympathize with his standpoint.
+Being an artist and designer, seeing your life's work being completely overtaken by soulless software must be terrifying. diff --git a/content/articles/dijkstra-knuth.md b/content/articles/dijkstra-knuth.md new file mode 100644 index 0000000..bfa994c --- /dev/null +++ b/content/articles/dijkstra-knuth.md @@ -0,0 +1,36 @@
++++
+date = '2025-12-14T17:14:30+01:00'
+draft = false
+title = 'On the old-school approach to programming'
+summary = 'After having read EWD190, I decided to relate to it a bit.'
++++
+
+It has long lingered on my mind to reflect, at least partially, on my experience of the last 3 years, as the B.Sc. in Computer Science I have recently undertaken is soon coming to an end.
+Fortunately, this is not the end of my journey as a Computer Scientist, but there are specific things that I did not realize about Computer Science before I embarked on this endeavour, the most important of which is this: Computer Science is 90% reading and understanding and 10% coding.
+I believe it to be the most important thing I have learned about the field itself in the last 3 years.
+Here is why.
+Dealing with complex problems is hard.
+Programming is all about solving complex problems: we programmers live by optimizing our code as best we can, and by finding solutions to the problems we encounter while doing so.
+While it is no doubt nice to have working code that does something cool, or a solution to a problem that meets the specification, I don't think that is the mindset a programmer should have -- that is, solving a problem is not about arriving at just any solution somehow.
+
+Solving coding tasks requires time.
+This might be difficult to admit for some, as it has been for me.
+But understanding a problem requires patiently reading and digesting the context, the possible solutions and, most importantly, the doubts one might have about one's own solution.
+Needless to say, if you have solved a problem without asking questions about it, then it wasn't a difficult (and, by proxy, important) problem to solve in the first place.
+Reading code is hard.
+It's sometimes like reading an essay in a foreign language.
+Your head hurts, your eyes are getting sore, and after 6 hours of staring at the screen you conclude you don't understand anything anymore.
+One of my favourite quotes about computing, from TempleOS creator [Terry Davis](https://en.wikipedia.org/wiki/Terry_A._Davis), reflects this perfectly: _What’s reality? I don’t know. When my bird was looking at my computer monitor I thought, ‘That bird has no idea what he’s looking at.’ And yet what does the bird do? Does he panic? No, he can’t really panic, he just does the best he can. Is he able to live in a world where he’s so ignorant? Well, he doesn’t really have a choice. The bird is okay even though he doesn’t understand the world. You’re that bird looking at the monitor, and you’re thinking to yourself, ‘I can figure this out.’ Maybe you have some bird ideas. Maybe that’s the best you can do._
+It would almost seem like this time has been wasted, since you might not have produced a line of code.
+Nevertheless, this is all there is to programming.
+
+After 3 years, it appears to me that my views about Computer Science align most with those of Donald Knuth and Edsger Dijkstra.
+I had first stumbled upon Donald Knuth's website long ago, while exploring Jamie Zawinski's blog and looking for top figures in CS to study.
+On his [website](https://www-cs-faculty.stanford.edu/~knuth/email.html) Knuth writes: _What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don't have time for such study._
+There it is.
+Computing takes time.
+There's no silver bullet yet, and we as programmers have to take our time to think about problems in depth.
+There have been many comments on the peculiar teaching style and way of being of Edsger Dijkstra, but I believe he has made some really good points about this too.
+What describes my experience over the last 3 years well is his quote: _The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility[...]._
+I think this is the approach to take, because so often computers help us verify and point out that we really don't know anything -- we are just pretending we do.
+
diff --git a/content/articles/good-cs-books.md b/content/articles/good-cs-books.md new file mode 100644 index 0000000..96d7528 --- /dev/null +++ b/content/articles/good-cs-books.md @@ -0,0 +1,20 @@
++++
+date = '2025-07-25T11:29:52+02:00'
+draft = false
+title = 'Computer Science books every student should read'
+summary = ' '
++++
+
+1. [Frederick P. Brooks, _The Mythical Man-Month: Essays on Software Engineering_](/articles/mythical-man-month).
+
+2. Carl Hamacher and Zvonko Vranesic, _Computer Organization_.
+
+3. David A. Patterson and John L. Hennessy, _Computer Organization and Design: The Hardware/Software Interface_.
+
+4. Andrew Tanenbaum, David Wetherall, Nick Feamster, _Computer Networks_.
+
+5. Tanenbaum, A.S., Bos, H.J., _Modern Operating Systems_.
+
+6. Maurice Herlihy, Nir Shavit, Victor Luchangco, Michael Spear, _The Art of Multiprocessor Programming_.
+
+7. [W. Brian Arthur, _The Nature of Technology: What It Is and How It Evolves_.](/articles/nature-of-technology) diff --git a/content/articles/i-am-an-ai-denier.md b/content/articles/i-am-an-ai-denier.md new file mode 100644 index 0000000..4b5d847 --- /dev/null +++ b/content/articles/i-am-an-ai-denier.md @@ -0,0 +1,45 @@
+---
+title: "I Am An AI Denier"
+date: 2026-02-20T09:02:36+01:00
+draft: false
+---
+
+This article is inspired by [this blogpost by Anthony Moser](https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html).
+I cited it previously, but now seems like the time to write a post akin to it.
+
+I am an AI denier.
+Unlike Moser, I do not openly hate on AI, but I actively deny it a place in my life.
+Disabling AI-generated search engine responses, GitHub Copilot in my IDE of choice or chatgpt.com is not enough for me.
+My `/etc/hosts` file blacklists all the modern platforms that push AI slop, such as reddit.com, youtube.com, arstechnica.com and others.
+Denying the obvious is simply not enough.
+
+But where do you get online news?
+Don't you feel left out?
+Aren't you at a disadvantage, since everyone is using AI now?
+
+Anthony Moser is entirely right.
+It's embarrassing for me to hear these questions.
+People who ask them should feel ashamed of themselves for sparing no critical thought on their own use of generated content.
+
+_To speak politely about AI, you put disclaimers before criticism: of course I’m not against it entirely; perhaps in a few years when; maybe for other purposes, but. You are supposed to debate how and when it should be used. You are supposed to take for granted that it must be useful somewhere, to someone, for something, eventually.
People who are rich and smart and respected are saying so, and it would be arrogant to disagree with such people (Anthony Moser's blog)._
+
+But as Moser is an AI hater, I am an AI denier.
+What surprises me the most is that people fail to notice the obvious, and are often astonished when I tell them I don't use it.
+So here is a list of my reasons why.
+The dangers of the [post-truth world](https://www.forbes.com/sites/iese/2021/06/19/a-post-truth-world-why-ronaldo-did-not-move-coca-cola-share-price/) with [AI-generated slop on virtually every social media webpage](https://www.404media.co/pinterest-is-drowning-in-a-sea-of-ai-slop-and-auto-moderation/) and the [unconditional acceptance of its output](https://en.wikipedia.org/w/index.php?title=Chatbot_psychosis&oldid=1325425220), sometimes with [consequences absolutely not anticipated by the model designers](https://en.wikipedia.org/w/index.php?title=Murder_of_Suzanne_Adams&oldid=1326997937), are simply ignored by the general public, which sometimes even unwittingly [treats AI on the same level as humans](https://en.wikipedia.org/wiki/ELIZA_effect).
+Journals [talk about programmers who do not use AI](https://cacm.acm.org/blogcacm/the-last-solo-programmers/) as the modern-day [Robert Neville](https://en.wikipedia.org/wiki/I_Am_Legend_(novel)) from the well-known Matheson novel.
+The disappearance of beloved services which used to provide genuine, funny content, the rising plausibility of the Dead Internet Theory and instant content generation drive the internet community towards a clear [downfall](https://en.wikipedia.org/w/index.php?title=Enshittification&oldid=1338656342).
+And it is, apparently, the AI deniers who are wrong.
+How can anyone stay blind to this?
+
+The Byte magazine cover from 1977 shows this perfectly.
+The cover artist originally intended to depict the Altair 8800 as a window to a utopian future.
+Nowadays, in 2026, I think the image resonates differently.
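As an aside on the denial described earlier: the `/etc/hosts` blacklist I mentioned can be sketched roughly as follows. The hostnames are the ones named above; the `0.0.0.0` null-route address is the usual convention, and the exact entries are illustrative, not a copy of my actual file:

```
# /etc/hosts -- illustrative excerpt, not my actual file.
# Null-route platforms that push AI slop; list the www. variants too,
# since browsers resolve each hostname separately.
0.0.0.0 reddit.com www.reddit.com old.reddit.com
0.0.0.0 youtube.com www.youtube.com
0.0.0.0 arstechnica.com www.arstechnica.com
```

Note that this only blocks name resolution on the local machine; it does nothing against applications that contact hard-coded IP addresses directly.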
+It is the screens that keep us preoccupied, shielding us from the real world, keeping us engaged and driving us to generate and consume content that is nothing but harmful, yet advertised as beneficial.
+It is the biggest lie of the 21st century, right in front of us, yet largely unnoticed.
+
+
+
+While this post, a rant as you might call it, is certainly an attempt to frame my thoughts coherently, it is likely I will be updating it often.
+My thoughts stand as they are written here.
+
diff --git a/content/articles/lecture-photos.md b/content/articles/lecture-photos.md new file mode 100644 index 0000000..84875f3 --- /dev/null +++ b/content/articles/lecture-photos.md @@ -0,0 +1,29 @@
+---
+title: "Computer Organization and Computer Networks"
+date: 2025-10-09T05:43:30+01:00
+draft: false
+summary: ' '
+---
+
+Here are some cool photos from my first lectures.
+(I am indeed wearing the same shirt in all of them.)
+
+### Tutorial on the I/O subsystem during Computer Organization 2024/2025:
+
+
+
+
+
+
+
+
+### First introduction during Computer Organization 2024/2025:
+
+
+
+
+
+### First introduction during Computer Networks 2024/2025:
+
+ diff --git a/content/articles/mythical-man-month.md b/content/articles/mythical-man-month.md new file mode 100644 index 0000000..5e29614 --- /dev/null +++ b/content/articles/mythical-man-month.md @@ -0,0 +1,39 @@
+---
+title: "Review of The Mythical Man-Month by Frederick P. Brooks"
+date: 2025-12-22T17:25:54+01:00
+draft: false
+summary: '8/10 book, worth recommending'
+---
+
+"The Mythical Man-Month" by Frederick P. Brooks is a book about his experience during the development of OS/360.
+It was recommended to me by my honors project supervisor, Prof. Alexandru, but even without his recommendation I would have likely stumbled upon this book.
+Its contents are hailed as timelessly relevant, and some of the most universal truths about working on coding projects are described inside.
+While I admit I don't get all of the book's many premises, some of them really speak to me.
+Taking after the opening of the 18th chapter of the book -- _For brevity is very good, whether we are, or are not understood_ -- I will go through some of its premises and try to relate them to my own experiences.
+
+Perhaps the most well-known theorem is Brooks's Law: _Adding people to a late project will make it even later._
+This is exactly right -- it is indeed the communication overhead, the time needed for new members to comprehend the already existing codebase, and the difficulty of rejecting the new colleagues' ideas on how to improve things that contribute to delaying the project even further.
+Compared to, for example, the construction industry, this is a stunningly unexpected discovery.
+It goes against everyone's best intuition, including mine.
+
+While on the topic of construction, Brooks admits in the 20th Anniversary Edition of the book (the one I read) that his original approach -- build one to throw away -- is now obsolete.
+According to him, the right approach is to grow software, not build it.
+I also agree with this.
+It is so much easier to design a system this way, rather than try to fit everything into one's mind beforehand, expecting all things to work if we just conceptually figure out everything first.
+This is not to be confused with the fact that the majority of a programmer's work happens inside their head only -- this is the essence of programming which Brooks talks about.
+
+"No Silver Bullet" is one of the chapters added to the book, missing from the original edition.
+It conveys a premise that I think is the most important thing that I have learned from this book.
+That is -- we cannot change the essence of programming, and for the foreseeable years and decades to come, the struggle of the ordinary programmer will always be figuring out solutions to a problem within one's mind.
+No amount of abstraction, colorful IDEs and integrations, or pretty UI interfaces with hints and AI toolkits will overcome the fundamental truth about programming -- our work has no physical counterpart in the real world, just like mathematical equations do not.
+A mathematician may solve an equation on paper, but it is the thinking inside one's head that produces the solution, unlike a painter, whose art is immediately visible from the first brush stroke and is the goal and final product of the work.
+
+"The Mythical Man-Month" also gives me a clear goal to look forward to.
+Brooks states: _Very good professional programmers are ten times as productive as poor ones, at the same training and experience level._
+Reading this reminded me of a friend of mine who once mentioned that his goal is to become a 10X developer -- a person who is able to do the work of 10 ordinary programmers.
+I think this is a grand goal to work towards, and one I would certainly feel happy achieving.
+
+There are multiple concepts arising from the book which I have not mentioned.
+It was a really tough read for me personally, as it's a book that reads like a very old article.
+I had to look up many concepts, and relating to some of the chapters has been really difficult.
+So, I expect I'll be updating this blog post often while coming back to the book. diff --git a/content/articles/nature-of-technology.md b/content/articles/nature-of-technology.md new file mode 100644 index 0000000..587e846 --- /dev/null +++ b/content/articles/nature-of-technology.md @@ -0,0 +1,98 @@
+---
+title: "Review of The Nature of Technology by W. Brian Arthur"
+date: 2026-02-11T08:26:48+01:00
+draft: false
+summary: '"We need challenge, we need meaning, we need purpose, we need alignment with nature. Where technology separates us from these it brings a type of death."'
+---
+
+"The Nature of Technology" explains how the inventions of modern science come into being.
+This book, published in 2009, was recommended to me by my research project supervisor, Prof. Alexandru.
+Inside, W. Brian Arthur presents a new perspective on how technologies evolve, drawing a parallel between scientific advancements and a Darwinian-like theory of evolution.
+The book is written fairly well; nonetheless, there are points I would like to reflect upon.
+
+_One last disclaimer: Because I write a book on technology the reader should not take it that I am particularly in favor of technology.
+Oncologists may write about cancer, but that does not mean they wish it upon people.
+I am skeptical about technology and about its consequences._
+
+Already in the preface of the book, the author makes a statement that I particularly like, as it conforms to my own beliefs and reassures me that it is not paradoxical to be a Computer Scientist and a technology sceptic at the same time.
+Having read this, I breathe a sigh of relief, as it means I can continue in the direction of my discourse without having to worry that I might be hypocritical.
+
+_I have many attitudes towards technology.
+I use it and take it for granted.
+I enjoy it and occasionally am frustrated by it.
+And I am vaguely suspicious of what it is doing to our lives._
+
+To elaborate further on this quote, on the next page the author enters a discourse on the unease that technology brings to our daily lives.
+Arthur makes a really good point that human roots go back one way -- to nature.
+As such, the more our modern world deviates from the familiar, natural environment, the more we question the technology that causes this shift.
+
+_Our deepest hopes as humans lie in technology; but our deepest trust lies in nature._
+
+That is not to say that we should go out and live in the woods instead of cities.
+Nonetheless, these first 3 quotes play well into why we should be sceptical of technology.
+We hope for it to solve our problems, and with this hope come expectations and the unconditional acceptance of the solutions to modern issues that technologies provide.
+However, this does not mean it's correct to do so.
+I must admit that in recent times I have noticed fewer and fewer people, myself included, separating nature from technology.
+Since I was born (2004), I have been surrounded by innovations such as cars, cellphones, computers, etc.
+As a 12-year-old, I never felt "uneasy" about using a computer or a tablet.
+You could almost argue it was natural to me.
+It was only by becoming a Computer Science student that I was able to become aware of technology as separate from the natural order of life.
+These days:
+
+_Technology is a Thing directing human life, a Thing to which human life must bow and adapt._
+We accept technology without critical thought of the kind W. Arthur offers.
+Back in Arthur's day, people still trusted in nature, not technology.
+I would argue that these days this is no longer true.
+
+_And so, the story of this century will be about the clash between what technology offers and what we feel comfortable with._
+
+I disagree.
+I think the clash that Arthur predicts will never come.
+What we should be comfortable with will be imposed upon us, with little choice for the individual.
+Even these days, humans are more at ease with their phones constantly with them than alone with their thoughts.
+
+Reading further, Arthur elaborates on why the book is needed -- that the pure Darwinian model of evolution does not fit technology.
+He puts forward the premise of the entire book:
+
+_[...]
the novel technologies arise by combination of existing technologies and that therefore existing technologies beget further technologies._
+
+This thought makes some sense to me, but what is unacceptable from my point of view are the lines that follow roughly 5 sentences later:
+
+_We can say that technology creates itself out of itself._
+
+I understand what Arthur means here.
+That all technologies have a common root, and there is a causal relationship between them.
+However, the formulation of this sentence is, in my view, wrong.
+Technologies do not create themselves.
+We make them into what they are, and it is we who can decide whether to put a new innovation forward or not.
+Ethics forbid genetic engineering on humans, so we are collectively capable of stopping the march of technology for at least some innovations.
+These statements can of course be challenged further, but for now this is the way I think.
+Should new observations arise, I might change my mind.
+
+Further chapters of the book go more in-depth into the structure of technologies.
+Arthur puts forward three different ways to define what a technology is and sketches an abstract view of its internals.
+Here I can draw parallels with the concepts I was introduced to during programming classes.
+Ideas like abstraction, encapsulation, modularity and compartmentalization were familiar to me already, so I was surprised to see how generic they are and that they appear in all technologies around us, regardless of their domain.
+
+Talking about the structure of standalone inventions:
+_Each is an arrangement of connected building blocks that consists of a central assembly that carries out a base principle, along with other assemblies or component systems that interact to support this._
+
+This brings me back to the Tanenbaum vs. Torvalds debate about monolithic kernel design vs. micro-kernel design.
+In the end, Torvalds arguably won, since Linux is now the most widely deployed operating system kernel in the world.
+However, the above quote raises the question: did Tanenbaum ever stand a chance of winning in the first place?
+If the structure of an invention is a wide body plus smaller peripherals, does this mean that all inventions which do not follow this principle are bound to fail?
+I might be misunderstanding the point Arthur makes here -- you could also argue that a micro-kernel design still has a central kernel -- but I think it is worthwhile to reflect upon this, and upon whether or not all designs (should) follow this principle.
+
+W. Brian Arthur summarizes:
+_[...] all technologies are combinations of elements; that these elements themselves are technologies; and that all technologies use phenomena to some purpose._
+This neatly sums up the entire book and his theory.
+Unlike the Darwinian one, where natural selection dictates which species die and which live, the evolution of technology takes place through different combinations of elements that cater to (relatively) current human needs.
+A technology always fulfills a human purpose, and the way it does so is through combinations of other technologies, which in turn use raw natural phenomena at their most basic level.
+
+_Technologies are acquiring properties we associate with living organisms. [...] We fear technology as a living thing that will bring us death.
+Not the death of nothingness, but a worse death.
+The death that comes with no-freedom.
+The death of will._
+
+The only antidote to this is to take the reins of technology ourselves, as individuals, and harness it consciously and responsibly.
+Let us not forget that it is the individual who decides which technologies we use, and which we do not.
diff --git a/content/articles/poster-fair.md b/content/articles/poster-fair.md new file mode 100644 index 0000000..e38db39 --- /dev/null +++ b/content/articles/poster-fair.md @@ -0,0 +1,16 @@
+---
+title: "HP Poster Fair 2024/2025"
+date: 2026-02-08T15:38:10+01:00
+draft: false
+summary: This is a short upload of the photos so that they do not disappear into oblivion.
+---
+
+Last year in June I presented my poster during the annual HP Poster Fair at the VU.
+Here are the photos and my poster:
+
+
+
+
+
+
+
+ diff --git a/content/articles/useful-links.md b/content/articles/useful-links.md new file mode 100644 index 0000000..1aad127 --- /dev/null +++ b/content/articles/useful-links.md @@ -0,0 +1,32 @@
++++
+date = '2025-07-26T12:53:30+02:00'
+draft = false
+title = 'Useful links'
+summary = ' '
++++
+
+1. [jwz.org](https://www.jwz.org)
+
+2. [denshi.org](https://denshi.org)
+
+3. [landchad.net](https://landchad.net)
+
+4. [comfy.guide](https://comfy.guide)
+
+5. [pad.envs.net](https://pad.envs.net/)
+
+6. [envs.net](https://envs.net/)
+
+7. [blog.orhun.dev](https://blog.orhun.dev/no-bullshit-file-hosting/)
+
+8. [cs.stanford.edu/~knuth](https://cs.stanford.edu/~knuth/index.html)
+
+9. [conventionalcommits.org](https://www.conventionalcommits.org/)
+
+10. [unixdigest.com](https://www.unixdigest.com)
+
+11. [stallman.org](https://stallman.org/)
+
+12. [vintageapple.org](https://vintageapple.org/byte/)
+
+13. [www.baldurbjarnason.com](https://www.baldurbjarnason.com)
