I’ve been in ML since 2013 and have to say: wow… you and your team really do deserve praise for solid research and delivery. I’ll bookmark this video to point people to. Thank you
@goldnutter412
3 months ago
He's great! His dad was a chip designer... go figure :) Amazing backlog of content, sir. Especially chips..
@chinesesparrows
3 months ago
The span and depth of topics covered with an eye on technical details is truly awesome and rare. Smart commenters point out the occasional inaccuracies (understandable given the span of topics), which benefits everyone as well.
@WyomingGuy876
3 months ago
Dude, try living through all of this.
@PhilippBlum
3 months ago
He has a team? I assumed he was just grinding and great at this.
@fintech1378
3 months ago
He is an independent AI researcher
@strayling1
3 months ago
Please continue the story. A cliffhanger like that deserves a sequel! Seriously, this was a truly impressive video and I learned new things from it.
@rotors_taker_0h
3 months ago
In the '80s, Hinton, LeCun, Schmidhuber and others developed backpropagation for neural nets and CNNs (convolutional NNs), then RNNs and LSTMs in the '90s, but it was still a very niche area of study with "limited potential" because NNs always performed a bit worse than other methods, until a couple of breakthroughs in speech recognition and image classification at the end of the '00s. In 2012 AlexNet brought instant hype to CNNs, which was followed by one-liners critically improving the quality and stability of training: better initial values, sigmoid -> relu, dropout, normalization (forcing values to be in a certain range), resnet (just adding the values of the previous layer to the next one). That allowed training models so much bigger and deeper that they started to dominate everything else by sheer size. Then came the Transformer in 2017, which allowed treating basically any input as a sequence of tokens, and the scaling hypothesis, which brought us to the present time, with "small NNs" being "just" several billion parameters. Between 2012 and now there has also been extreme progress in hardware for running these networks: optimizing the precision (it turned out you don't need 32-bit floats to train/use NNs; the lowest possible is 1 bit, and a good amount is 4-bit integer, which is 100x faster in hardware), new instructions, matmuls, sparsity, tensor cores and systolic arrays and what not, to get truly insane speedups. For comparison, AlexNet was trained on 2 GTX 580s, so about 2.5 TFLOPs of compute. This year we have ultrathin, light laptops with 120 TOPs, server cards with 20,000 TOPs, and the biggest clusters are in the range of 100,000 such cards, so in total about a billion times more compute thrown at the problem than 12 years ago. And 12 years ago that was 1000x more than at the start of the century, so we've got about a trillion times more compute to make neural networks work, and we're still not anywhere close to done. Of course, the early pioneers had no chance without that much compute.
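Some of those "one-liners" really are one-liners; a rough numpy sketch of two of them, ReLU and a residual shortcut (toy shapes, not from any particular network):

```python
import numpy as np

def relu(x):
    # The "sigmoid -> relu" one-liner: cheap and avoids saturating gradients
    return np.maximum(0.0, x)

def residual_block(x, w):
    # ResNet-style shortcut: just add the layer's input to its output
    return x + relu(x @ w)

print(relu(np.array([-1.0, 3.0])))                        # [0. 3.]
print(residual_block(np.array([[1.0, -2.0]]), np.eye(2))) # [[ 2. -2.]]
```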
@honor9lite1337
3 months ago
2nd that 😊
@thomassynths
3 months ago
"The Second Neural Networks"
@soanywaysillstartedblastin2797
3 months ago
Got this recommended to me after getting my first digit recognition program working. The neural networks know I’m learning about neural networks
@PeteC62
3 months ago
Your videos are always well worth the time to watch them, thanks!
@MFMegaZeroX7
3 months ago
I love seeing Minsky come up, as I have a (tenuous) connection to him: he is my academic "great-great-grand-advisor." That is, my PhD advisor's PhD advisor's PhD advisor's PhD advisor was Minsky. Unfortunately, stories about him never got passed down; I only have a bunch of stories from my own advisor and his advisor, so it is interesting seeing what he was up to.
@honor9lite1337
3 months ago
The Society of Mind.
@hififlipper
3 months ago
"A human being without life" hurts too much.
@dahahaka
3 months ago
Avg person in 2024
@dwinsemius
3 months ago
The one name missing from this, from my high-school memory, is Norbert Wiener, author of "Cybernetics". I do remember a circa-1980 effort of mine to understand the implications of rule-based AI for my area of training (medicine). The MYCIN program (infectious disease diagnosis and management), based at Stanford, could have been the seed crystal for a very useful application of symbol-based methods. It wasn't maintained and expanded after its initial development: it took too long to do data input, and it didn't handle edge cases or apply common sense. It was, however, very good at difficult "university-level specialist" problems. I interviewed Dr. Shortliffe, and his assessment was that AI wouldn't influence the practice of medicine for 20-30 years. I was hugely disappointed. At the age of 30 I thought it should be just around the corner. So here it is 45 years later and symbolic methods have languished. I think there need to be one or more "symbolic layers" in the development process of neural networks. For one thing, it would allow insertion of corrections and offer the possibility of analyzing the "reasoning".
@honor9lite1337
3 months ago
Your storyline is decades long, so how old are you? 😮
@dwinsemius
3 months ago
@@honor9lite1337 7.5 decades
@tracyrreed
3 months ago
5:14 Look at this guy, throwing out Principia Mathematica without even name-dropping its author. 😂
@PeteC62
3 months ago
It's nothing new. Tons of people do that.
@theconkernator
3 months ago
It's not Isaac Newton, if that's what you were thinking. It's Russell and Whitehead.
@PeteC62
3 months ago
Well that's no good. I can't think of a terrible pun on their names!
@dimBulb5
3 months ago
@@theconkernator Thanks! I was definitely thinking Newton.
@honor9lite1337
3 months ago
@@theconkernator yeah? 😮
@amerigo88
3 months ago
Interesting that Claude Shannon's observations on the meaning of information being reducible to binary came about at virtually the same time as the early neural network papers. Edit: The Mathematical Theory of Communication by Shannon was published in 1948. Also, Herb Simon was an incredible mind.
@stevengill1736
3 months ago
Gosh, I remember studying physiology in the late 60s, when human nervous system understanding was still in the relative dark ages - for instance, plasticity was still unknown, and they taught us that your nerves stopped growing at a young age and that was it. But I had no idea how far they'd come with machine learning in the Perceptron - already using tunable weighted responses simulating neurons? Wow! If they could have licked that multilayer problem it would have sped things up quite a bit. You mentioned the old chopped-up planaria trick - are you familiar with the work of Dr. Michael Levin? His team is carrying the understanding of morphogenesis to new heights - amazing stuff! Thank you kindly for your videos! Cheers.
@klauszinser
3 months ago
There must have been a speech by Demis Hassabis on 14 Nov 2017, in the late morning, at the Society for Neuroscience in Washington. In this keynote lecture he told the audience that AI is nothing more than applied brain science. He must have said (I only have the translated German wording), 'First we solve the problem and understand what intelligence is (possibly the more German usage of the word), and then we solve all the other problems.' The 6000-8000 people must have been extremely quiet, knowing what this young man had already achieved. Unfortunately I never found the video. (Source: Manfred Spitzer.)
@honor9lite1337
3 months ago
Studying in the late 60s? Even my dad was born in the late 70s; how old are you?
@francescotron8508
3 months ago
You always bring up interesting topics. Keep it up, it's great job 👍.
@JohnHLundin
3 months ago
Thanks Jon, as someone who tinkered with neural nets in the 1980s and 90s, this history connects the evolutionary dots and illuminates the evolution/genesis of those theories & tools we were working with... J
@jakobpcoder
3 months ago
This is the best documentary on this topic I have ever seen. It's so well researched, it's like doing the whole Wikipedia dive.
@HaHaBIah
3 months ago
I love listening to this with our current modern context
@helloworldcsofficial
3 months ago
This was great. A more in-depth one would be awesome: the fall and rise of the perceptron, going from single to multiple layers.
@fibersden638
3 months ago
One of the top education channels on KZitem for sure
@VaebnKenh
3 months ago
It's pronounced Pæpert, not Pāpert. And that was a bit of a confusing way to present the XOR function: since you set it up with an XY plot, you should have put the inputs on different axes with the values in the middle. Other than that, great video as always 😊
@BobFrTube
3 months ago
Thanks for bringing back memories of the class I took from Minsky and Papert (short, not long a in pronouncing his name) in 1969 just when the book had come out. You filled in some of the back story that I wasn't aware of.
@JiveDadson
3 months ago
That book set AI back by decades.
@Wobbothe3rd
3 months ago
Recurrent Neural Networks are about to make a HUGE comeback.
@FrigoCoder
3 months ago
@@luciustarquiniuspriscus1408 Mamba is already a valid alternative to transformers, and it is a variant of linear recurrent neural networks. Also, I do not see how we could avoid recurrent neural networks for music generation; they or their variants seem like a perfect fit for that very specific generation task.
@facon4233
3 months ago
xLSTM FTW
@clray123
3 months ago
@@luciustarquiniuspriscus1408 The SSM/Mamba papers already address this. In fact you can train a GPT-3 like small model using Mamba right here and now, with excellent performance (both in terms of training speed and outputs). With "infinite attention" (well, limited by the capacity of the hidden state vector).
@NanoAGI
2 months ago
As always I love your videos, the depth of knowledge, and the people who comment, as they all have interesting stories about what is in your videos. One of the descendants of the symbolic movement was cognitive architectures like SOAR and ACT-R, from Newell's theories of cognition. Symbolic systems are not gone, and they perform many tasks that neural networks don't do well. However, neural networks do something so much better than cognitive systems: getting all the data and knowledge of the world into the network, and being able to extract it out. There is no way you can program all of that as rules in symbolic systems. There will be a merger of both systems so they can perform better reasoning and cognitive tasks in the next iteration of all of this. We are really just at the beginning, standing on the shoulders of giants.
@JorgeLopez-qj8pu
3 months ago
SEGA creating an AI computer in 1986 is crazy
@TheChipMcDonald
3 months ago
The Einstein, Oppenheimer, Bohr, Feynman, Schrödinger and Heisenbergs of AI. The McCulloch-Pitts neuron network and Rosenblatt's training paradigm took 70 years to get to "here" and should be acknowledged. I remember as a little kid in the 70s reading articles on the different people leading the symbolic movement and thinking "none of them really seem to know or have conviction in what they're campaigning for".
@TerryBollinger
3 months ago
The difficulty with Minsky's adamant focus on symbolic logic was his failure to recognize that the vast majority of biological sensory processing is dedicated to creating meaningful, logically usable symbolic representations of a complicated physical world. Minsky’s position thus was a bit like saying that once you understand cream, you have all you need to build a cow.
@rubes8065
3 months ago
I absolutely love your channel. I look forward to your new videos. Thank you. I’ve learned sooo much 🥰
@danbaker7191
3 months ago
Good summary. Ultimately, even today, there are no functionally useful and agreed definitions of intelligence and thinking. Maybe we're unintentionally approaching this from the back, by making things that sort of work, then later figuring out what's really going on (not yet!)
@theorixlux
3 months ago
I am probably not the first, but I am surprised at how far back the idea of artificial "intelligence" goes.
@lbgstzockt8493
3 months ago
It surprises me how "little" progress we have made in that time. Pretty much every other discipline has made incredible leaps in the past 60-70 years, yet AI is still nowhere near the human brain. Obviously an early perceptron is infinitely worse than a modern LLM, but AGI doesn't really feel any closer than back then.
@theorixlux
3 months ago
@@lbgstzockt8493 if you're comparing what a few smart computer geeks did over 80 years to what mother nature did over 3-ish billion years, then I would argue it's not surprising AT ALL that we haven't simulated a human brain yet...
@goldnutter412
3 months ago
We've been here before Before the universe..
@theorixlux
3 months ago
@@goldnutter412 ?
@AS40143
3 months ago
The first idea of machines that could think appeared in the 17th century as Leibniz's mill concept
@alonalmog1982
3 months ago
Wow! Well explained, and a far more engaging story than I expected.
@AaronSchwarz42
3 months ago
People are like transistors: it's how they are connected that makes all the difference.
@freemanol
3 months ago
I think there's one guy who doesn't receive much attention: Demis Hassabis. I knew him as the founder of the game company that made Republic: The Revolution, but he then went on to do a PhD in neuroscience. I wondered why. Now it makes sense. He founded DeepMind.
@bharasiva96
3 months ago
What a fantastic video tracing the history of neural nets. It would also be really useful if you could put links to the papers mentioned in the video in the description.
@subnormality5854
3 months ago
Amazing that some of this work was done at Dartmouth during the days of 'Animal House'
@firstnamesurname6550
3 months ago
Very nice and well-scoped contextualization of the development of NNs... I know the video is about a specific branch of computer science, but the seminal work for AI research was not Alan Turing's papers... the seminal work for AI and computer science is George Boole's The Laws of Thought (1854), which contains Boolean algebra.
@LatentSpaceD
3 months ago
Super happy I found you again! Your content is off the charts amazing! I wish I could Patreon you up; I'm in my 50s, autistic af, and I don't have an income. Appreciate you. P.S. I thought you said Rosenblatt died in a tragic coding accident! Lmfao. Love the flatworms! Keep on keeping your valuable perception turned on!!
@perceptron-1
3 months ago
I'm the PERCEPTRON Thank you for making this movie.
@MostlyPennyCat
3 months ago
I took a genetic algorithms and neural networks module at university. In the exam we would train and solve simple neural networks on paper with a calculator. Good fun. This was in 2000.
@SB-qm5wg
3 months ago
Well I learned a whole lot from this video. TY 👏
@ktvx.94
3 months ago
Damn, we're really going full circle. We've been hearing eerily similar things from people in similar roles as the folks in this video.
@Bluelagoonstudios
3 months ago
Wow, I didn't know they researched this back then, so long ago. Thank you for educating me on this matter. Today, AI is amazing already. I developed a USB reader/tester with GPT-4, and the code it wrote was spot on. The rest was just electronics. An amazing tool.
@hisuiibmpower4
3 months ago
Hebb's postulate is still being taught in neuroscience; the only difference is that a time element has been added. It's now called "spike-timing-dependent plasticity".
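A minimal sketch of that time element (the exponential window and the constants below are illustrative, not from any particular paper): the weight change depends on whether the presynaptic spike precedes or follows the postsynaptic one.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    # Spike-timing-dependent plasticity: if the presynaptic spike
    # precedes the postsynaptic one (dt > 0), potentiate; else depress.
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

print(stdp_dw(10.0, 15.0) > 0)  # pre before post -> potentiation (True)
print(stdp_dw(15.0, 10.0) < 0)  # post before pre -> depression (True)
```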
@travcat756
3 months ago
Minsky & Papert and the XOR problem was the invention of deep learning
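The limitation in question fits in a few lines: a single threshold unit cannot compute XOR, but two layers with hand-picked weights can.

```python
def step(x):
    # Threshold activation, as in a classic perceptron unit
    return 1 if x >= 0 else 0

def xor(a, b):
    # Hidden layer computes OR and NAND; output unit ANDs them together.
    h1 = step(a + b - 0.5)      # OR
    h2 = step(-a - b + 1.5)     # NAND
    return step(h1 + h2 - 1.5)  # AND

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```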
@Ray_of_Light62
3 months ago
I studied the perceptron in the '70s. My conclusion was that the hardware was not up to the task. Using a matrix of photoresistors as the input proved the design principle but couldn't be brought to a working prototype.
@JiveDadson
3 months ago
Before the multi-layer perceptron, statisticians used that exact same model with sigmoid activation functions and called the process "ridge regression." The statisticians knew how to "train" the model using second-order multivariate optimization and "weight decay" methods, which were vastly superior to the ad hoc backpropagation methods that neural network researchers were still using as late as the 1980s. The neural net guys were blinded by their unwarranted certainty that they were onto something new.
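For reference, "weight decay" is just an L2 penalty on the weights; in gradient-descent form it adds a shrinkage term to each step. A toy sketch with made-up numbers (plain linear least squares here, for simplicity):

```python
import numpy as np

def ridge_grad_step(w, X, y, lr=0.1, lam=0.5):
    # Squared-error gradient plus the L2 ("weight decay") term lam * w
    grad = X.T @ (X @ w - y) / len(y) + lam * w
    return w - lr * grad

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0])
w = np.zeros(2)
for _ in range(100):
    w = ridge_grad_step(w, X, y)
print(np.round(w, 3))  # [0.5 0.5], shrunk from the unpenalized solution [1, 1]
```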
@Alex.The.Lionnnnn
3 months ago
I love how cheesy that name is. "The Perceptron!" Is it one of the good transformers or the bad ones??
@LimabeanStudios
3 months ago
Working in physics one of the first things I learned is half the names are just "-tron" and it always makes me giggle
@Chimecho-delta
3 months ago
Worth reading up on Walter Pitts! Interesting life and work
@yellow1pl
3 months ago
Hi! Great fan of your channel! :) However, this time I'm a bit puzzled. Several years ago I read somewhere Marvin Minsky talking about how he built this (awesome, in my opinion) mechanical neural network. Since then I was sure his network was the first. However, here you talk about a neural network built almost a decade later and call it the first one... You mentioned that Marvin Minsky did some neural network research previously, but he left. OK, fine, so why is his neural network, built before the perceptron, not the first one in your opinion? :) Maybe a next video? :) Also, to my knowledge Turing's paper was published in 1937, not '36. In 1936 Alonzo Church published his paper on the Entscheidungsproblem. We don't usually remember whoever was second to come up with the theory of gravity or relativity. But for some reason we remember Turing for being second in something :) Just a fun fact :)
@DamianGulich
3 months ago
There's more about this early history of artificial intelligence in this 1988 book: Graubard, S. R. (Ed.). (1988). The artificial intelligence debate: False starts, real foundations. MIT Press. The chapters also detail a very interesting discussion of related general philosophical problems and limitations of the time.
@gabotron94
3 months ago
Would love to hear you talk about Doug Lenat's Cyc and whatever happened to that approach to AI.
@0MoTheG
3 months ago
When I first read about NNs around 2000, this was still the state of the matter 30 years later. When I was at university, NNs were not a topic. Then after 2010 things suddenly changed: training data and FLOPs had become available.
@noelwalterso2
3 months ago
The title should be "the rise and rise of the perceptron" since it's the basic idea behind nearly all modern AI.
@rickharold7884
3 months ago
Love it. Awesome summary.
@chinchenhanchi
3 months ago
I was just studying this subject at university 😮 One of the many lectures was about the history of AI. What a coincidence!
@thomascorner3009
A month ago
Thank you for this segment, and the asianometry (strange name 🙂) channel. Lots of interesting stuff. I have worked in the field of neural networks for many years, and what I find most striking is how the field has been plagued by researchers who project the most abstract brain functions onto mechanisms with negligible complexity (here, Rosenblatt's hyperboles about a couple of linear units with dynamic weights). This is bad for the image of the field (especially to the general public, which finances this research) but also for the field itself, where new ideas have to fight these oversimplifications to be recognized. The works of people like Stephen Gross and Stanislas Dehaene (who, in a talk at the Montreal Neurological Institute in the 2000s, likened the process of a human becoming conscious of some stimulus to a printer that turns on to print a document) are unfortunate examples. But global warming will probably make this a moot point anyway: human society's inability to manage the responsibilities that come from the technology our brains have allowed us to develop (together with the profit-at-all-cost economic model used to exploit it) will destroy us before we can understand the organ that made it possible. What a shame...
@Charles-Darwin
3 months ago
the 'boating accident' is peculiar
@youcaio
3 months ago
Thanks!
@gscotb
3 months ago
A significant moment is when the instructor leaves the plane & says "do a couple takeoffs & landings".
@cthutu
3 months ago
Great, great video. But the McCulloch-Pitts neuron didn't use weights, and you displayed diagrams showing weights whenever you mentioned it.
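For contrast, one common reading of the 1943 model has no weights at all: excitatory inputs are simply counted against a threshold, and any active inhibitory input vetoes firing. A sketch under that reading:

```python
def mcp_neuron(excitatory, inhibitory, threshold):
    # McCulloch-Pitts (1943): all-or-none unit, no weights.
    if any(inhibitory):  # absolute inhibition vetoes the output
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# AND and OR fall out of the threshold choice over two excitatory inputs
print(mcp_neuron([1, 1], [], threshold=2))   # 1 (AND fires)
print(mcp_neuron([1, 0], [], threshold=2))   # 0
print(mcp_neuron([1, 0], [], threshold=1))   # 1 (OR fires)
print(mcp_neuron([1, 1], [1], threshold=2))  # 0 (inhibited)
```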
@unmanaged
3 months ago
Great video. Love the look back at currently used technology.
@nexusyang4832
3 months ago
I was just watching a video by Formosa TV about the founder of Supermicro. Just curious whether there is any interest in Supermicro or its founder (for the English-speaking folks who don't understand Mandarin, hehe)... just thought I'd ask. 🙂
@sinfinite7516
3 months ago
Great video :)
@warb635
3 months ago
Russian vessels close to the Belgian coast (in international waters) are being closely watched these days...
@AndyLevinMilkandHoneyAvenue
3 months ago
Terrific
@leannevandekew1996
3 months ago
In 1996 neural networks were touted as predicting pollution from combustion sources without any need for chemical or visual monitoring.
@alexdrockhound9497
3 months ago
looks like a bot
@leannevandekew1996
3 months ago
@@alexdrockhound9497 Why'd you write "channel doesn't have any conte" on your channel ?
@alexdrockhound9497
3 months ago
@@leannevandekew1996 typical bot. trying to deflect. Your profile is AI generated and you look just like adult content bots i see all over the platform.
@leannevandekew1996
3 months ago
@@alexdrockhound9497 You totally are.
@anush_agrawal
3 months ago
I would stalk you just as you said.
@GeorgePaul82
3 months ago
Wow, that's strange timing. I'm in the middle of reading the book "The Dream Machine" by Mitchell Waldrop. Have you read that yet? It's about these exact same people.
@pvtnewb
3 months ago
As I recall, AMD's Zen microarchitecture also uses some form of perceptron in its branch predictor.
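The idea can be sketched roughly (after the published perceptron-predictor scheme; the history length, threshold, and training loop below are made up for illustration): each branch keeps a weight vector that is dotted with recent branch outcomes.

```python
def predict(weights, history):
    # Dot product of signed history (+1 taken, -1 not taken) with the
    # branch's weights; weights[0] acts as a bias term.
    y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return 1 if y >= 0 else -1

def train(weights, history, outcome, theta=4):
    # Update on a misprediction or when confidence |y| is below theta.
    y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    if (1 if y >= 0 else -1) != outcome or abs(y) <= theta:
        weights[0] += outcome
        for i, h in enumerate(history):
            weights[i + 1] += outcome * h
    return weights

w = [0, 0, 0, 0]
hist = [1, -1, 1]       # recent outcomes: taken, not-taken, taken
for _ in range(5):      # this branch is always taken given that history
    w = train(w, hist, outcome=1)
print(predict(w, hist)) # 1 (predicts taken)
```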
@-gg8342
3 months ago
Very interesting topic
@luisluiscunha
3 months ago
I needed a video to do the dishes to, after spending a day making pedagogical materials on Stable Diffusion. Now I will rewind and delight in watching this video carefully. *Thank you*
@perceptron-1
3 months ago
It is not enough to digitally model the most common LLMs for artificial intelligence today; it doesn't matter whether it is 1-bit or 1 trit (1.58b = log2(3)). It has to be done with working ANALOG hardware! If software, then machine learning (an algorithm). If hardware, then a learning machine (hardware that is better and faster than an algorithm).
@renanmonteirobarbosa8129
3 months ago
MLPs are still very prominent. Attractor NNs are also very popular; transformers would not exist without ANNs.
@darelsmith2825
3 months ago
ELIZA: "Cat got your tongue?" I had a Boolean Logic class @ LSU. Very interesting.
@AABB-px8lc
3 months ago
I see what you did there. Year 3030, "History of AI" essay: "As we know, our new hyper-deep-inner-curling-double-flashing neural network is almost working; we need a few more tiny touches and literally 2 extra layers to show its awesomeness in the coming year 3031." And again, and again.
@onetouchtwo
3 months ago
FYI, XOR is pronounced “ex-or” like “ECK-sor”
@londomolari5715
3 months ago
I find it ironic/devious that Minsky criticized perceptrons for their inability to scale. None of the little toy systems that came out of MIT or Yale (Schank) scaled either.
@kevin-jm3qb
3 months ago
As a fellow 4-hour sleeper: any advice on brain health? I'm getting paranoid.
@harambetidepod1451
3 months ago
My CPU is a neural-net processor; a learning computer.
@Phil-D83
3 months ago
Minsky is currently frozen, waiting for return after his untimely death in 2016 or so
@Anttisinstrumentals
3 months ago
Every time I hear the word "multifaceted" I think of ChatGPT.
@mattheide2775
3 months ago
I enjoy this channel more than I understand the subjects covered. I worry that AI will be a garbage-in, garbage-out product. It seems like a product forced upon me, and I don't like it at all. Thanks for the video.
@JohnVKaravitis
3 months ago
0:12 Is that Turing on the right?
@fintech1378
3 months ago
Yuxi in the Wired, any audio essay?
@DarkShine101
3 months ago
Part 2 when?
@halfsourlizard9319
3 months ago
symbolic AI was a neat idea ... rip
@jamillairmane1585
3 months ago
Great entry, very à propos!
@jamesjensen5000
3 months ago
Is every cell conscious?
@iRiShNFT
A month ago
Your audio is always WAY too low compared to everything else online... have to turn the speakers up too high and then back down after your videos. No other notes. Love your videos... nobody else is going to teach us this nerdy shit =)
@robertpearson8546
3 months ago
Threshold gates are NOT Boolean circuits. Boolean logic is a subset of Threshold logic. You can simulate Boolean gates with Threshold gates but not vice versa. Threshold logic gates are not neural networks. Neural networks use simulated threshold logic gates.
@smoggert
3 months ago
🎉
@ahnabarnob5004
24 days ago
Now people use neural networks to draw furry pictures 🙂
@vikramgogoi3621
3 months ago
Shouldn't it be "the" Principia Mathematica?
@ReadThisOnly
3 months ago
asianometry my goat
@LydellAaron
3 months ago
The first neural network theory is/was valid. The computing hardware is catching up. All the equations are valid with waves or wave states.
@Frostbytedigital
3 months ago
ALL the equations from the original perceptron are correct if you sub in waves or wave states? That's fascinating. Any sources?
@LydellAaron
3 months ago
@@Frostbytedigital Yes, totally fascinating. In many cases you just have to see the equivalence in mathematical form, as a sum of products like at 2:35. In some cases you just sub in a complex number, for the most part. I modeled a polychromatic light particle in our recent wave-based patent, where we expand the equation of a photon, c = lambda * nu, as a sum of products. I filed it under my company "Calective." Also look up "Higher dimensional quantum computing" by Sabre Kais and Barry Sanders.
@sunroad7228
3 months ago
“In any system of energy, Control is what consumes energy the most. No energy store holds enough energy to extract an amount of energy equal to the total energy it stores. No system of energy can deliver sum useful energy in excess of the total energy put into constructing it. This universal truth applies to all systems. Energy, like time, flows from past to future” (2017).
@marshallbanana819
3 months ago
This guy has been messing with us for so long I can't tell if the "references and sources go here" is a bit, or a mistake.
@Finnishpeasant
3 months ago
Didn't I build this in Alpha Centauri?
@georhodiumgeo9827
3 months ago
An explanation of perceptrons and where they went?... Get the heck out of my head, I was literally just wondering about this.
@AngelosLakrintis
3 months ago
Everything old is new again
@0x00official
3 months ago
And people still think AI is a new technology
@halfsourlizard9319
3 months ago
protip: 'xor' is pronounced 'x or' not 'x-o-r'
@MorgothCreator
3 months ago
Nothing has changed; AI today is in the same state as then: a money grab promising dreams.
@dcamron46
3 months ago
“A book called Principia Mathematica”, you mean Newton’s Principia..?
@DSCH4
A month ago
Russell and Whitehead. An attempt to ground mathematics in perfect rigor via type theory.
Comments: 210