https://news.ycombinator.com/item?id=42157558
Is there a reason why the link goes to the discussion at the bottom of that page rather than the beginning?
Could this be folded into the other discussion? (I don't see that the link has been posted there yet)
What if your text editing and presentation experience was slow and laggy? That’s Notion.
Some orgs I've worked for were very "wiki" driven - there's a big expectation of using Confluence or Notion to navigate documentation. This applies to both big (5000+) and small (50+) organizations in my experience.
Other organizations I've worked in were very document centric - so you organize things in folders, link between documents (GDoc @SomeDocument or MSFT's equivalent). Those organizations tend to pass around links to documents or "index" documents. Similarly, this applies for both big and small organizations in my experience.
Of the two, I tend to prefer the latter. Without dedicated editors, the wiki version seems to decay rapidly, especially once the org grows above some size.
Knowledge management is hard...
Edit: also, the pop-up menu on the right side that completely breaks your scrollbar. Putting that UI/UX degree to use.
Well then that’s a relief.
These are some of the biggest weasel words of IT. Every one of them is implicitly comparative, and yet the comparison, or any sort of hard metric, is always completely absent from their use.
Infinite configurability means infinite validation time.
Which, based on what I see in the rendered archive.is version, is being used to do nothing outside of the normal use of a standard Markdown-based SSG like Nikola or Jekyll.
Not that doing more would be a good idea anyway.
Archive seems to "bake" JS sites to plain HTML.
I guess that just goes to show that the author’s mind was, in fact, blown.
Why did someone think it was a good idea to switch to JavaScript?
I think the person who'll get value out of SICP will not have any problem picking up Scheme syntax on the fly.
If you study any other related field like math or physics you become accustomed to learning a formal system for the context of a particular problem.
CS students tend to have this weird careerist view where every page must directly help them get a job.
Schools are so desperate to keep up enrollment numbers today that many have capitulated and are giving students what they want instead of what the faculty thinks they need.
If all someone wants is the practical benefits of programming and has no interest in the underlying theory, they shouldn't waste their time and money on a CS degree. All the practical information is available for free or at very low cost.
Call me a hopeless optimist, but I think there's a better way out there.
Universities should start their own AI-tutor development programs, in co-operation with others, because the only way AI tutors can become better is by practice, practice, practice.
So I'm not sure if this is a new viewpoint or not, but it is not only students that need training; it is also teachers who need to be trained more in teaching. AI is all about "training", and understanding is about training. Training is the new paradigm for me.
Is that why they are so bad at adapting to foreign languages and frameworks? Maybe they should go back to the basics.
Somewhat understandable considering that student loans put you into indentured servitude unless you have rich parents. Although I still think they're shortsighted. A good CS graduate should understand that programming languages are just syntactic sugar over the underlying concepts and have little trouble translating/picking up the basics of new languages.
A fairer comparison is an engineering or applied math major, not pure math at MIT.
Engineers rarely do Laplace transforms by hand either.
The book is written for 1st-year STEM undergrads at MIT. So maybe 2nd or 3rd year at a state school.
When I did my undergrad CS degree, the fact that Scheme was so heavily used was a common complaint they received from students. It just wasn't a marketable skill.
But even if that were true and you did take 20+ classes in Scheme, you're still a college educated computer scientist. You can't pick up JavaScript or Python in time for a job interview for an entry level job? They're easy languages to learn. If you survived four years of exclusively being taught with Scheme, they'd be a breeze to pick up.
They wanted a trade school/practical education in something immediately marketable, not a theoretical education.
The reason I remember this is that in my "exit interview" as a senior I mentioned that I appreciated the exposure to these languages and theory and my advisor remarked "we don't hear that very often, the usual feedback is that we don't teach the languages employers want"
I do feel like the value of using Scheme is teaching students early on that syntax doesn't really matter. Those that are actually interested in CS theory will find this enlightening, those that are simply in it because investment banking is so 2007 will churn out.
Nor is there any good reason to filter people out preemptively. If seeing `foo(x)` instead of `(foo x)` makes the student more receptive to a proper understanding of recursion, that's just fine.
The encodings can be a bit confusing, but they're really elegant and tiny at the same time. Take for example a functional implementation of the Maybe monad in JavaScript:
// The Maybe value is a function that takes one handler per case and picks the right one.
Nothing = nothing => just => nothing  // always selects the "nothing" handler
Just = v => nothing => just => just(v)  // applies the "just" handler to the wrapped value
pure = Just
bind = mx => f => mx(mx)(f)  // Nothing stays Nothing; Just(v) becomes f(v)
evalMaybe = maybe => maybe("Nothing")(v => "Just " + v)
console.log(evalMaybe(bind(Nothing)(n => pure(n + 1)))) // Nothing
console.log(evalMaybe(bind(Just(42))(n => pure(n + 1)))) // Just 43
I think that writing such code, if only for educational purposes, can be really helpful in actually understanding how the state "flows" during the monadic bind/return. Typical monad instantiations of Maybe do not give such deep insight (at least to me).
> Just because you can do a thing doesn’t mean you should.
Of course you should, where would be the fun in that?
Higher mathematics in a nutshell.
>Of course you should, where would be the fun in that?
Also higher mathematics in a nutshell.
Narrator asks: Who should we put in charge of <<thing that will affect people in a tangible way>>?
Not the mathematicians! echoes the crowd in unanimity.
Narrator asks: Who will we delegate the task of <<abuse of notation>> to?
The crowd grumbles, arguing amongst themselves whether such a question even warrants an answer. A mathematician stands up, proclaiming "We'll take it!", following up with, "Once you understand the notation involved in my previous statement, you will understand why this outcome is inevitable."
The crowd, seeing the wisdom of not even embarking on that tribulation, assents to the delegation, given the task of undoing the abuse of notation for the legibility of the layperson is also delegated to the aspiring mathematician.
Scene opens on current day...
Are they? But in the Nothing you have 2 identical members (`nothing' without arguments), won't that throw an exception?
To borrow Rust syntax (pun intended):
enum Nothing {
    nothing,
    just,
    nothing
};
That's just weird.

enum Maybe<T> {
    Nothing,
    Just(T),
}
I agree that the cognitive load in a language like JS, which is not prepared to accommodate this paradigm, is not worth it.
even when deciding to use Haskell we need to weigh the pros and cons wrt the project's goals
Even in Lean, a dependently typed language where recursors can be made explicit, people prefer using pattern matching instead of them. There is even sugar for transforming some recursor-like functions into pattern-matching-like syntax. FYI, in Lean recursors are marked as noncomputable due to performance concerns, so you can use them to write proofs but not programs.
Seen from yet another point of view, this is transforming inductive types into a function corresponding to a visitor. And yet functional programming folks spent years trying to convince people to replace visitors with proper inductive/algebraic data types and pattern matching, so this idea is a step backwards even for them.
data Maybe a = Nothing | Just a
foldMaybe :: (Unit -> r) -> (a -> r) -> Maybe a -> r
The two higher order functions passed into `foldMaybe` are your `Nothing` and `Just` (modulo I added the Unit param to the Nothing case to be a little more precise).

I'd iterate on that and say: everything is just languages and dialogues, with functions being one component of them. Over time, we’ve evolved from machine languages to higher-level ones, but most popular languages today still focus on the "how" rather than the "what".
Programming paradigms, even those like functional and logic programming, require the "how". My rant is this: the next major iteration(s) in programming languages should shift focus to the "what". By abstracting away the "how", we can reach a higher-order approach that emphasizes intent and outcomes over implementation details.
I don't want to constrain this idea to Z3, LLMs, or low/no-code platforms, but rather to emphasize the spirit of the "what". It’s about enabling a mindset and tools that prioritize defining the goal, not the mechanics.
I know this contradicts our work as software engineers where we thrive on the "how", but maybe that’s the point. By letting go of some of the control and complexity, we might unlock entirely new ways to build systems and solve problems.
If I should be plain realistic, I'd say that in the middle, we need to evolve by mixing both worlds while keeping our eyes on a new horizon.
SQL is an example of a language that is at least somewhat like that.
SELECT foo WHERE bar = baz
Doesn't really say "how" to do that, it only defines what you want.

You aren't telling the database how to get those results from the files on the disk. You are telling it what values you want, matching what conditions, and (in the case of joins) what related data you want. If you want an aggregation grouped by some criteria you say what values you want summed (or averaged, etc.) and what the grouping criteria are, but not how to do it.
Not a perfect example and it breaks entirely if you get into stuff like looping over a cursor but it is why SQL is usually called a declarative language.
In elixir you can pop off as many as you like.
http://www.skuunk.com/2020/01/elixir-destructuring-function....
Which can let you unroll function preambles, or apply different rules if for instance an admin user runs a function versus a regular user.
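For readers coming from Python, a rough analogue of dispatching on the shape of an argument (a hypothetical sketch using Python 3.10+ structural pattern matching, not the Elixir mechanism itself; all names are made up):

def handle(user, action):
    # Match on the shape/content of the argument, roughly like Elixir function-head patterns.
    match user:
        case {"role": "admin"}:
            return f"admin may {action}"
        case {"role": role}:
            return f"{role} asked to {action}"

print(handle({"role": "admin"}, "delete"))  # admin may delete
print(handle({"role": "guest"}, "delete"))  # guest asked to delete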
A closure with no behaviour is just a pointer to the enclosed variable. A closure with 2 pointers is a pair, from which you can get the car and cdr.
The runtime needs to make the pointee available outside its definition, so escape analysis, garbage collection, etc. But no dictionary is needed.
that said, since I've been reading about kanren and prolog I'm about to say "everything is a relation" :)
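A minimal Python sketch of the closures-as-pairs idea above (essentially SICP's cons/car/cdr exercise; the names are just illustrative):

def cons(a, b):
    # The "pair" is nothing but a closure over a and b.
    def pick(i):
        return a if i == 0 else b
    return pick

def car(pair):
    return pair(0)

def cdr(pair):
    return pair(1)

p = cons(1, 2)
print(car(p), cdr(p))  # 1 2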
I got to the same conclusion a while ago, except that I found that it's lambdas all the way down.
https://www.cambridge.org/core/books/all-the-math-you-missed...
Did you read the book in isolation or was it a part of a class / MOOC ?
Nice memories.
I fell in love with Scheme eventually, as it had such a simple syntax. Getting used to the parentheses did take some time though.
But for real-world programming, the tedious parts are related to validation, parsing, and other business logic.
So I'd prefer a book that helps teach CS by using a real-world codebase to solve the real-world, everyday problems of a software engineer instead.
You can have your cake and eat it.
It's practical and productive and profitable, which is great, but not really the original goal.
Why would you validate if you can parse? If you have a decent chunk of experience in implementing business logic then you know that your quality of life will be destroyed by switches and other inscrutable wormhole techniques up until the point where you learn to use and build around rule engines. SICP shows you how you can tailor your own rule engine, so you won't have to get the gorilla and the jungle when you reach for one in an enterprisey library.
both can be a foundation for mathematics, and hence, a foundation for everything
what's interesting is how each choice affects what logic even means.
How could we go the other way? A set can be "defined" by the predicate that tests membership, but then how do we model the predicates? Some formalism like the lambda calculus?
A category can be defined in terms of its morphisms without mentioning objects and a topos has predicates as morphisms into the subobject classifier.
lambda calculus would provide a computational way to determine the truth value of the predicate, any computable predicate that is.
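As a tiny illustration of "a set is its membership predicate", here is a hypothetical Python sketch (covering only decidable membership, of course):

# A "set" is just a function from a value to a truth value.
evens = lambda n: n % 2 == 0
small = lambda n: n < 10

def union(s, t):
    return lambda x: s(x) or t(x)

def intersection(s, t):
    return lambda x: s(x) and t(x)

print(intersection(evens, small)(4))   # True
print(intersection(evens, small)(12))  # False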
If computers and their processors are a useful abstraction, why don't we write everything directly in machine language - or microcode for that matter?
This is more about computing than about computers. As Dijkstra put it, "Computer science is no more about computers than astronomy is about telescopes."
Computing involves languages, including many languages that are not machine languages. Every language that's higher level than machine code requires translation to actually execute on the particular machines that we've developed as a result of our history and legacy decisions.
The lambda calculus is a prototypical language that provides very simple yet general meanings for the very concept of variables - or name-based abstraction in general - and the closely related concept of functions. It's a powerful set of concepts that is the basis for many very powerful languages.
It also provides a mathematically tractable way to represent languages that don't follow those principles closely. Compilers rely on intermediate forms like static single assignment (SSA), which is fundamentally equivalent to a subset of the functional concept of continuation passing style (CPS). In other words, mainstream languages need to be transformed through functional style in order to make them tractable enough to compile.
The mapping from a lambda calculus style program to a CPU-style register machine is quite straightforward. The connection is covered in depth in Chapter 5 of SICP, "Computing with Register Machines." Later work on this found even better ways to handle this, like Appel's "Compiling with Continuations" - which led to the SSA/CPS equivalence mentioned above.
There's a lot to learn here. It's hard to recognize that if you know nothing about it, though.
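As one small, concrete taste of the CPS idea mentioned above, a hypothetical Python sketch of the same function in direct style and in continuation passing style:

# Direct style: the result is returned to the caller.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# CPS: the function never "returns" a result; it hands it to the continuation k.
def fact_k(n, k):
    if n == 0:
        return k(1)
    return fact_k(n - 1, lambda r: k(n * r))

print(fact(5))                 # 120
print(fact_k(5, lambda r: r))  # 120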
How do you represent an irregular float?
Which will make any function that uses floating point numbers mind-blowingly complex. But there's probably an easier way by creating some transformation from (Integer -> a) to (F64 -> a), so that only the transformation gets complex.
Anyway, there are many reasons people don't write actual programs this way.
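For integers, at least, the standard "numbers as functions" trick is Church numerals; a hypothetical Python sketch:

# A Church numeral n is the function that applies f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def add(m, n):
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # "Observe" the numeral by applying +1 to 0.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(add(three, three)))  # 6

Floats have no comparably natural inductive structure, which is presumably part of why the transformation idea above gets complicated.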
("+", ("fib", ("-", "n", 2)), ("fib", ("-", "n", 1))),
The two calls to `fib` are surely meant to be `fibonacci` since the latter is defined, but not the former. Indeed, the code is correct in the GitHub repo: https://github.com/savarin/pyscheme/blob/0f47292c8e5112425b5...
- Everything is just a function (SICP)
- Everything is just an object (Smalltalk, and to some extent Java)
- Everything is just a closure (the original Common LISP object system)
- Everything is just a file of bytes (UNIX)
- Everything is just a database (IBM System/38, Tandem)
Everything is just a Turing machine. Everything is just a function. Everything is just Conway's Game of Life.
The fact that all of these forms are equally expressive is quite a surprise when you first discover this. Importantly, it doesn’t mean that any one set of axioms is “more correct” than the other. They’re equally expressive.
That one ends in a tarpit where everything is possible but nothing of interest is easy.
Real development IMX is not much different. People just have low standards for "interesting" nowadays, and also have vastly increased access to previous solutions for increasingly difficult problems. But while modern programming languages might be more pleasant to use in many ways, they have relatively little to do with the combined overall progress developers have made. Increased access to "compute" (as they say nowadays), effort put into planning and design, and the simple passage of time are all far more important factors in explaining where we are now IMO.
It's like being mad that the hammer was so successful that we invented the screw to improve on its greatest hits.
- Everything is just a buffer (K&R C and many of its descendants)
- Everything is just a logical assertion (Prolog)
I look at the list and I see a bunch of successes, though some of them are niche.
By making the model follow some simple rules, which we think the real thing follows as well, we can reason about what happens when some inputs to the real thing being modeled change, by running our model (i.e., the simulation).
Thus you could add to your list: "Everything is just a simulation".
Except the real thing of course :-)
* modulo infinity
** except a small number of languages that are not
What doesn't fit into this particular "everything?"
- functions that don't fit in cache, RAM, disk, etc.
- functions that have explosive big-O, including N way JOINs, search/matching, etc.
- functions with side effects, including non-idempotent. Nobody thinks about side channel attacks on functions.
- non-deterministic functions, including ones that depend on date, time, duration, etc.
- functions don't fail midway, let alone gracefully.
- functions don't consume resources that affect other (cough) functions that happen to be sharing a pool of resources
- function arguments can be arbitrarily large or complex - IRL, there are limits and then you need pointers and then you need remote references to the web, disk, etc.
(tell me when to stop - I can keep going!)
One of the things I miss from my C++ days was the ability to mark functions as const, which made them fairly pure.
Being able to clearly mark which is which, but also to combine them easily, was very productive.
I threw some content up here: https://intellec7.notion.site/Drinking-SICP-hatorade-and-why... , along with an unrelated criticism of SICP.
I'd like to better understand what the limitations are of "everything is just a function".
It seems correct to me that you can't directly prove inequality between Church numerals without starting with some other fact about inequality. Whereas with inductive data types, a proof system can directly "observe" the equality or inequality of two concrete instances of the same inductive type, by recursively removing the outermost constructor application from each instance.
The relevant part is that this is basically what “software engineers' continual education” is going to look like
That's a fun statement.
There are externalities like networking and storage, but still data transformation in a way.
If this number, jump to numberA, otherwise jump to numberB. Also, if numberC, store numberD at numberE. ;)
And of course the order of the material could be debated and rearranged countless ways. One of my future planned projects is to do my own video series presenting the material according to my own sensibilities.
It's nice to hear that the course apparently still stays true to its roots while using more current languages like Python. Python is designed as a pragmatic, multi-paradigm language and I think people often don't give it enough credit for its expressive power using FP idioms (if not with complete purity).
Python has very poor support for functional programming. Lists are not cons based, lambdas are crippled, pattern matching is horrible and not even expression based, namespaces are weird.
Python is not even a current language, it is stuck in the 1990s and happens to have a decent C-API that unfortunately fueled its growth at the expense of better languages.
Why would a decent C-API fuel its growth? Also can you give me some examples of better languages?
I'm no senior developer, but I find Python very elegant and easy to get started with.
What drew me to Python back in 2006 as a CS student who knew C and Java was its feeling like executable pseudocode compared to languages that required more “boilerplate.” Python’s more expressive syntax, combined with its extensive “batteries included” standard library, meant I could get more done in less time. Thus, for a time in my career Python was my go-to language for short- and medium-sized programs. To this day I often write pseudocode in a Python-like syntax.
Since then I have discovered functional programming languages. I’m more likely to grab something like Common Lisp or Haskell these days; I find Lisps to be more expressive and more flexible than Python, and I also find static typing to be very helpful in larger programs. But I think Python is still a good choice for small- and medium-sized programs.
This changed with heavy use, of course. Such that now packaging is a main reason to hate python. Comically so.
It is only in the later versions where they have pushed compatibility boundaries that this has gotten obnoxious.
If you're used to Scheme, Common Lisp, or Haskell, Python's arbitrary decisions about e.g. lambda or how scopes work may be grating. But Python is the BASIC of the modern day, and people laughed at BASIC in the 80s too... except businesses ran on BASIC code and fortunes had been made from it.
It's still not great for functional programming, but far, far better than it used to be.
If you avoided the many footguns the language offered, you could actually write pretty clean functional code. Of course you had to be extremely diligent with your testing because the interpreter did not provide much help even with warnings and strict enabled.
While I do find it annoying that Python used 'list' to mean 'dynamic array', it is a lot better than a ton of Church encoding in the other common teaching language, Java.
Linked lists may not be native in Python, but they are trivial to implement.
Anyway, Python is intentionally not functional because Guido dislikes functional programming
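As a quick sketch of the linked-list point above (plain cons cells in Python; the names are just illustrative):

class Cons:
    # A minimal cons cell: a value plus a reference to the rest of the list (or None).
    def __init__(self, car, cdr=None):
        self.car = car
        self.cdr = cdr

def to_pylist(cell):
    out = []
    while cell is not None:
        out.append(cell.car)
        cell = cell.cdr
    return out

lst = Cons(1, Cons(2, Cons(3)))
print(to_pylist(lst))  # [1, 2, 3]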
https://en.m.wikipedia.org/wiki/ArsDigita#ArsDigita_Foundati...
https://archive.org/details/arsdigita_01_sicp/
They were selling USB keys with the entire curriculum; if someone could upload an ISO, that would be amazing. https://web.archive.org/web/20190222145553/aduni.org/drives/
It seems rather silly to force SICP into Python.
This version is one of my all-time favorite StackOverflow answers: https://stackoverflow.com/questions/13591970/does-python-opt...
Example: After Lisp, you might replace every for loop with forEach or chain everything through map/reduce. But unless you’re working in a language that fully embraces functional programming, this approach can hurt both readability and performance.
At the end of the day, it’s grounding to remember there’s mutable memory and a CPU processing the code. These days, I find data-oriented design and “mechanical sympathy” (aligning code with hardware realities) more practical day-to-day than more abstract concepts like Church numerals.
I'm thinking about learning elixir but lack of types is kind of a turn off for me.
Here's a summary of the type system they're exploring for Elixir: https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h...
Not GP but I'm using Clojure for both the front-end (ClojureScript) and the "back-end" (server running Clojure), sharing Clojure code between the two.
Clojure is not typed, but I use Clojure specs. It's not a type system but it's really nice. You can spec everything, including spec'ing functions: for example you can do stuff like "while in dev, verify that this function indeed returns a collection that is actually sorted every time it is called". I'm not saying "no types + Clojure specs" beats types, but it exists and it helps solve some of the things types are useful for.
This sounds interesting. Do I understand correctly, this checks the "spec" at runtime? What happens if a spec fails?
Generally I’d say Elixir’s lack of “hard” static typing is more than made up for by what you get from the BEAM VM, OTP, its concurrency model, supervisors etc.
That said if you’re interested in leveraging the platform whilst also programming with types I’d recommend checking out Gleam (https://gleam.run), which I believe uses an HM type system.
ECS is all about composition rather than inheritance, and decoupling logic from data. It is not strictly immutable for performance reasons, but it has a similar character as the immutable functional state-management frameworks in WebDev (Redux, Elm and co).
It's not just about maintainability, it actually can be awkward to fit certain patterns into ECS, but it has significant advantages in terms of performance (particularly being CPU cache-friendly) and being able to massively parallelize safely and without having to think too much about it. It can also be a helpful abstraction for distributed computing and networking in multiplayer.
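For readers who haven't met the pattern, a deliberately tiny Python sketch of the ECS idea (components are plain data keyed by entity id, and systems are functions over just the components they need; everything here is hypothetical and ignores the cache-layout concerns real engines care about):

# Components: plain data stored per component type, keyed by entity id.
positions = {1: [0.0, 0.0], 2: [5.0, 1.0]}
velocities = {1: [1.0, 0.0]}  # entity 2 has no velocity component

def movement_system(positions, velocities, dt):
    # A system touches only the entities that have the components it needs.
    for entity, vel in velocities.items():
        pos = positions[entity]
        pos[0] += vel[0] * dt
        pos[1] += vel[1] * dt

movement_system(positions, velocities, dt=0.1)
print(positions[1])  # [0.1, 0.0]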
And yet goto is even less popular than Lisp despite being the only way that CPUs implement control flow.
What's even more bizarre is that despite caches being the only way to get any performance out of modern CPUs, we still don't have a language that treats the memory hierarchy as a first-class citizen. The closest is Linux-flavored C with a sea of underscores.
I confess it has made transcribing some algorithms a bit easier.
Lest we forget that modern x86 machine code is itself effectively an assembly language, and the real machine code, or microinstructions in newspeak, is completely hidden from view.
Besides, stuff like forEach, map/reduce was already present in Smalltalk collections, and was copied into Object Pascal, C++, even if without first class grammar for it.
Exactly because the underlying memory is mutable, there are mechanisms in ML languages for mutation, when really needed.
As I've learned more and studied more math though, I've come to the conclusion that the relation really is the more fundamental primitive. Every function can be represented as a kind of restricted relation, but the converse is not true, at least not without adding considerable extra gadgetry.
While of course relational databases and SQL are the best known examples of relational programming, and are highly successful, I still believe it's a largely untapped space.
However, my interest currently is less in the space of programming language design and more in teaching very young children the basics of math. And for whatever reason it's considerably easier to teach a predicate like "is big" as a 1-ary relation and "is bigger than" as a 2-ary relation than trying to capture the same concepts as functions.
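A hypothetical Python sketch of that framing, with relations as predicates of different arities:

# A 1-ary relation: a predicate over one thing.
def is_big(x):
    return x in {"elephant", "whale"}

# A 2-ary relation: a predicate over a pair of things.
def is_bigger_than(x, y):
    sizes = {"mouse": 1, "dog": 2, "elephant": 3}
    return sizes[x] > sizes[y]

# A function is then the special case of a 2-ary relation where each
# first element is related to exactly one second element.
print(is_big("elephant"))              # True
print(is_bigger_than("dog", "mouse"))  # True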
That being said... SICP, Compilers, and RAFT all left me with the gnawing feeling that there was more juice to be squeezed from computer science than I was able to understand.
Why were my parsers and compilers not bidirectional? Why am I writing a utilitarian OO parser for an elegant FP language? Why is there no runtime polymorphic isomorphism for my client/server RAFT nodes?[1]
Drowning in my own sorrows, I began binge drinking various languages, drifting from one syntax to the next. CL, Scheme, Java, C#, Haskell, Forth, TypeScript, Clojure... ahh, long love affair with Clojure. But in the end, even Clojure, for all of its elegance, could not solve the original problem I was facing and at the end of the day in many ways was just "a better Python".
I was nearly ready to call it quits and write my own language when all of a sudden I discovered... Prolog.
Not just any Prolog-- Scryer Prolog. Built on Rust. Uncompromising purity. And Markus Triska's Power of Prolog videos on YouTube.[2]
My God. They should have sent a poet.
Bidirectional parsing? No-- N-dimensional parsing via definite clause grammars, which ship with the standard library. First-class integer constraint solvers. And it is the first language I have ever worked with that takes the notion of "code is data" seriously.
Scryer is still a young language, but it's going big places. Now is a great time to get involved while the language and community is still small.
I will end this love letter by saying that I owe my career and much of my passion to Dave, and because of him Python is still how I earn my bread and afford expensive toys for my Yorkie.
You rock, Dave!
[*]: this was a typo, should be Advanced Python Mastery, but in many ways "advanced python salary" is actually more accurate, in my case anyway.
[1]: If issues like this don't bother you, and you haven't been coding for at least 10-15 years, maybe come back later. Learning Prolog and Scryer in particular is really hard at first, even harder than Haskell/Clojure.
At first I did not expect that I would find Prolog to be useful for general purpose computing –– it turns out Prolog has some surprising properties that I wish were present in other languages such as Clojure.
Arguments passed to Prolog "predicates" (a unit of work similar to functions) are NOT evaluated UNLESS you use "call" (similar to eval) on them directly. That may seem crazy at first, but the implication is that it completely erases the line between code and data. Yes, Lisp has macros, but there is a heavy cultural and technical line between a function and a macro. In Prolog, there is no distinction and no stigma around seamless metaprogramming.
Regarding the library story, it depends on how you look at it. Very soon Scryer Prolog will finish being embeddable in other languages, so in a sense it will have access to every library from every language. But from a vanilla perspective, it has incredibly powerful first-class constraint solvers, which I've never seen in any other general-purpose language.