
I’m actually interested in solving the documentation problem. Imo we as engineers are thinking too small, and keeping docs as a side thing is a recipe for irregular maintenance. Instead, what if docs were more like live blueprints of running systems? We don’t want the obvious stuff documented, like “there is a function called foo”, but foo’s relationship to other parts of the code and its runtime characteristics seem important. I think I’m imagining a different form of documentation that is tied to observability, but that’s because that information currently lives far away from the code, and ideally I’d like all information derived from a piece of code to be available in the same place.

Probably slightly off topic, but I’d be curious to hear what other people want out of automated systems in this space. I have so many half-baked ideas and would love to hear what others think/want.



What were some of the pain points mentioned?


Essentially that using the graph DB prevented any imposition of order or discipline on the structure of the data, due to the constant need to import new customer data in subtly different structures to keep the business running. That led to a complete inability to deliver anything new at all, since no one could make assertions about what's in there. Since they couldn't change it without risking breaking a customer, they were migrating one customer at a time to a classic RDBMS. (There were like 200 customers, each of which is a company you've probably heard of).

Many will go "you need a proper ontology", at which point you might as well just use an RDBMS. Ontologies are an absolute tarpit, as the semantic web showed. The graph illusion is similar to that academic delusion "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures", which is one of those quips that makes you appreciate just how theoretical some theoreticians really are.


Having worked with SQL for many years now, I truly disliked GraphQL when I used it. Most of the time all I need is just a csv-like structure of the data I'm working with, and I had a very difficult time getting something like that. Joining tables tended to be difficult where the pivot points between relationships were rather semantically unbounded, so it was difficult for me to create a reasonably rigid mental model of the data I was working with. That was especially noticeable in some of the API documentation I worked with -- where a simple REST-like API just gives a simple, usually flat, JSON response, GraphQL API responses were deeply nested, often in implementation-dependent ways[1], and thousands of lines long, which has IME made explorability practically impossible.

GraphQL implementations have consistently felt like hobby projects, where implementing GraphQL in an API becomes a thing to put on a resume rather than a way to make a functionally useful API.

[1] https://graphql-docs-v2.opencollective.com/queries/account (navigate to the variables tab for a real mindfuck)


And GraphQL is related to Graph databases how exactly? Just because they both have the word graph in them?


Heh, you're right I got it wrong but there's really no need for your weird aggression. Cheers.


It's unfortunate GraphQL co-opted the terminology, because it's quite different from the kinds of graphs these databases attempt to model.


IME the dreaded out of memory error. This is all they have to say on the matter: https://neo4j.com/developer/kb/recommendations-for-recovery-...


Experimenting with applying Meta's V-JEPA [0] architecture for representation learning to chess. One of the challenges is validating whether the model is learning useful dynamics of the game, so I'm using it as an excuse to learn some reinforcement learning by using the representations generated by the JEPA model to approximate useful Q-values [1]. This method currently has no search, so I'm planning on comparing with this paper [2], which achieves GM-level chess without any search. Honestly, I'm unsure if the full pipeline is stable enough to even converge, but it's fun experimenting. I'm bad at chess so I really want to make a bot that challenges the best bots on lichess.
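
To be concrete about the Q-value part: the core is just the textbook Q-learning update applied on top of the JEPA representations. A minimal sketch (names like q_update, alpha, and gamma are mine, not actual project code); in the real setup the Q-function would be a small network over the frozen embeddings rather than a scalar, but the update is the same idea:

  // One temporal-difference step of standard Q-learning:
  // Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
  fn q_update(q_sa: f32, reward: f32, max_q_next: f32, alpha: f32, gamma: f32) -> f32 {
      q_sa + alpha * (reward + gamma * max_q_next - q_sa)
  }

  // e.g. a terminal win (reward 1.0) from a position currently valued at 0.2:
  let new_q = q_update(0.2, 1.0, 0.0, 0.1, 0.99);
  assert!((new_q - 0.28).abs() < 1e-6);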

[0]: https://ai.meta.com/research/publications/revisiting-feature... [1]: https://en.wikipedia.org/wiki/Q-learning [2]: https://arxiv.org/abs/2402.04494


I feel like this is a really hard problem to solve generally and there are smart researchers like Yann LeCun trying to figure out the role of search in creating AGI. Yann's current bet seems to be on Joint Embedding Predictive Architectures (JEPA) for representation learning to eventually build a solid world model where the agent can test theories by trying different actions (aka search). I think this paper [0] does a good job in laying out his potential vision, but it is all ofc harder than just search + transformers.

There is an assumption that language is good enough at representing our world for these agents to effectively search over and come up with novel & useful ideas. Feels like an open question but: What do these LLMs know? Do they know things? Researchers need to find out! If current LLMs can simulate a rich enough world model, search can actually be useful, but if they're faking it, then we're just searching over unreliable beliefs. This is why video is so important, since humans are proof we can extract a useful world model from a sequence of images. The thing about language and chess is that the action space is effectively discrete, so training generative models that reconstruct the entire input for the loss calculation is tractable. As soon as we move to video, we need transformers to scale over continuous distributions, making it much harder to build a useful predictive world model.

[0]: https://arxiv.org/abs/2306.02572


I feel this idea that AGI is even possible stems from the deep, very deep, pervasive image of the human brain as a computer. But it's not one. In other words, no matter how complex a program you write, it's still a Turing machine, and humans are profoundly not that.

https://aeon.co/essays/your-brain-does-not-process-informati...

> The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.

> But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.

> If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

> no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent


It's a bit ironic, because Turing seems to have come up with the idea of the Turing machine precisely by thinking about how he computes numbers.

Now that's no proof, but I don't see any reason to think human intelligence isn't "computable".


I'm all ears if someone has a counterexample to the Church-Turing thesis. Humans definitely don't hypercompute so it seems reasonable that the physical processes in our brains are subject to computability arguments.

That said, we still can't simulate nematode brains accurately enough to reproduce their behavior so there is a lot of research to go before we get to that "actual knowledge".


Why would we need one?

The Church-Turing thesis is about computation. While the human brain is capable of computing, it is fundamentally not a computing device -- that's what the article I linked is about. You can't throw all the paintings made before 1872 into some algorithm and get Impression, soleil levant out. Or repeat the same with 1937 and Guernica. The genes of the respective artists and the expression of those genes created their brains, and then the sum of all their experiences changed them over their entire lifetimes, leading to these masterpieces.


The human brain runs on physics. And as far as we know, physics is computable.

(Even more: If you have a quantum computer, all known physics is efficiently computable.)

I'm not quite sure what your sentence about some obscure pieces of visual media is supposed to say?

If you give the same prompt to ChatGPT twice, you typically don't get the same answer either. That doesn't mean ChatGPT ain't computable.


>(Even more: If you have a quantum computer, all known physics is efficiently computable.)

This isn't known to be true. Simplifications of the standard model are known to be efficiently computable on a quantum computer, but the full model isn't.

Granted, I doubt this matters for simulating systems like the brain.


> no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent

You could say the same about an analogue tape recording. Doesn't mean that we can't simulate tape recorders with digital computers.


Yeah, yeah, did you read the article, or are you just grasping at straws from the quotes I made?


Honest question: if you expect people to read the link, why make most of your comment quotes from it? The reason to do that is to give people enough context to be able to respond to you without having to read an entire essay first. If you want people to only be able to argue after reading the whole of the text, then unfortunately a forum with revolving front-page posts based on temporary popularity is a bad place for long-form read-response discussions, and you may want to adjust accordingly.


See https://scottaaronson.blog/?p=2756 or https://www.brainpreservation.org/scott-aaronson-on-the-rele... and especially https://www.scottaaronson.com/papers/philos.pdf for a response to theories similar to the one in the article, by someone who has more patience than me.

The article you linked, at best, argues against a straw man.

See eg https://news.ycombinator.com/item?id=40687064 for one possible reply that brings the whole thing down.


> But despite being computational machines,

This is not true. His entire foundation is, as I mentioned and as the linked article explains, a metaphor, not the actual truth about how human brains work. This is the crux of the matter; this is why AGI is very obviously not possible on the track it is currently on. The problem is, as the article I linked notes:

> Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

Almost all AI researchers are in this rut: "firmly rooted in the idea that humans are, like computers, information processors" -- but this is just not true, it's a metaphor to explain how the brain works.

> A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.


You shouldn't have included those quotes if you didn't want people responding to them.


> I feel this idea that AGI is even possible stems from the deep, very deep, pervasive image of the human brain as a computer. But it's not one. In other words, no matter how complex a program you write, it's still a Turing machine, and humans are profoundly not that.

The (probably correct) assumption that the brain isn't a computer doesn't preclude the possibility of a program having AGI. A powerful enough computer could simulate a brain and use the simulation to perform tasks requiring general intelligence.

This analogy falls apart even more when you consider LLMs. They also are not Turing machines. They obviously only reside within computers, and are capable of _some_ human-like intelligence. They also are not well described using the IP metaphor.

I do have some contention (after reading most of the article) about this IP metaphor. We do know, scientifically, that brains process information. We know that neurons transmit signals and that there are mechanisms that respond non-linearly to stimuli from other neurons. Therefore, brains do process information in a broad sense. It's true that brains have very different structures from von Neumann machines and likely don't store and process information statically like they do.


> This analogy falls apart even more when you consider LLMs. They also are not Turing machines.

Of course they are, everything that runs on a present day computer is a Turing machine.

> They obviously only reside within computers, and are capable of _some_ human-like intelligence.

They so obviously are not. As Raskin put it, LLMs are essentially a zero day on the human operating system. You are bamboozled because they are trained to produce plausible sentences. Read Thinking, Fast and Slow to see why this fools you.


> Of course they are, everything that runs on a present day computer is a Turing machine.

A Turing machine is by definition Turing complete. You can run non-Turing-complete systems within Turing machines. Thus your statement contains a contradiction.

> They so obviously are not.

I'm well aware of their limitations. But I'm also not blind to their intelligence. Producing unique, coherent, and factually accurate text is human-like intelligence. Powerful models practically only fail on factuality. Humans also do that, but for different reasons.

It is human-like intelligence because there are other entities with varying degrees of intelligence, but none of them have been able to reason about text and make logical inferences about it, except LLMs.

I know they aren't very reliable at it but they can do it in many out of distribution cases. It's fairly easy to verify.

I read that book some years ago but can't think of what you're referring to. Which chapter/idea is relevant?


Sorry but to put it bluntly, this point of view is essentially mystical, anti-intellectual, anti-science, anti-materialist. If you really want to take that point of view, there's maybe a few consistent/coherent ways to do it, but in that case you probably still want to read philosophy. Not bad essays by psychologists that are fading into irrelevance.

This guy in particular made his name with wild speculation about How Creativity Works during the 80s when it was more of a frontier. Now he's lived long enough to see a world where people that have never heard of him or his theories made computers into at least somewhat competent artists/poets without even consulting him. He's retreating towards mysticism because he's mad that his "formal and learned" theses about stuff like creativity have so little apparent relevance to the real world.


“Do they know things?” The answer to this is yes but they also think they know things that are completely false. If there’s one thing I’ve observed about LLMs, it’s that they do not handle logic well, or math for that matter. They will enthusiastically provide blatantly false information instead of the preferable “I don’t know”. I highly doubt this was a design choice.


> “Do they know things?” The answer to this is yes but they also think they know things that are completely false

Thought experiment: should a machine with those structural faults be allowed to bootstrap itself towards greater capabilities on that shaky foundation? What would be the impact of a near-human/superhuman intelligence that has occasional psychotic breaks it is oblivious to?

I'm critical of the idea of super-intelligence bootstrapping off LLMs (or even LLMs with search) - I figure the odds of another AI winter are much higher than those of achieving AGI in the next decade.


Someone somewhere is quietly working on teaching LLMs to generate something along the lines of AlloyLang code so that there’s an actual evolving/updating logical domain model that underpins and informs the statistical model.

This approach is not that far from what TFA is getting at with the Stockfish comeback. Banking on pure stats or pure logic both look like obvious dead ends if we want real progress instead of toys. Banking on poorly understood emergent properties of one system to compensate for the missing other system also seems silly.

Sadly though, whoever is working on serious hybrid systems will probably not be very popular in either of the rather extremist communities for pure logic or pure ML. I’m not exactly sure why folks are ideological about such things rather than focused on what new capabilities we might get. Maybe just historical reasons? And thus the fallout from the last AI winter may lead us into the next one.


The current hype phase is straight out of “Extraordinary Popular Delusions and the Madness of Crowds”

Science is out the window. Groupthink and salesmanship are running the show right now. There would be a real irony to it if we find out the whole AI industry drilled itself into a local minimum.


You mean, the high-interest-rate landscape made corpos and investors alike cry out in a loud panic while, coincidentally, people figured out they could scale up deep learning, and thus a new Jesus Christ was born for scammers to have a reason to scam stupid investors with the argument that we only need 100000x more compute and then we can replace all expensive labour with one tiny box in the cloud?

Nah, surely Nvidia's market cap as the main shovel-seller in the 2022 - 2026(?) gold-rush being bigger than the whole French economy is well-reasoned and has a fundamentally solid basis.


It couldn’t have been a better-designed grift. At least when you mine bitcoin you get something you can sell. I’d be interested to see what profit, if any, even a large corporation has seen from burning compute on LLMs. Notice I’m explicitly leaving out use cases like ads ranking, which almost certainly do not use LLMs even if they do run on GPUs.


>> Sadly though, whoever is working on serious hybrid systems will probably not be very popular in either of the rather extremist communities for pure logic or pure ML.

That is not true. I work in logic-based AI (a form of machine learning where everything -- examples, learned models, and inductive bias -- is represented as logic programs). I am not against hybrid systems, and the conference of my field, the International Joint Conference on Learning and Reasoning, included NeSy, the International Conference on Neural-Symbolic Learning and Reasoning (and will again, from next year, I believe). Statistical machine learning approaches and hybrid approaches are widespread in the literature of classical, symbolic AI, such as the literature on Automated Planning and Reasoning, and you need only take a look at the big symbolic conferences like AAAI, IJCAI, ICAPS (planning) and so on to see that there is a substantial fraction of papers on either purely statistical or neuro-symbolic approaches.

But try going the other way and searching for symbolic approaches in the big statistical machine learning conferences: NeurIPS, ICML, ICLR. You may find the occasional paper from the Statistical Relational Learning community but that's basically it. So the fanaticism only goes one way: the symbolicists have learned the lessons of the past and have embraced what works, for the sake of making things, well, work. It's the statistical AI folks who are clinging on to doctrine, and my guess is they will continue to do so, while their compute budgets hold. After that, we'll see.

What's more, the majority of symbolicists have a background in statistical techniques -- I, for example, did my MSc in data science, and let me tell you, there was hardly any symbolic AI in my course. But ask a Neural Net researcher to explain to you the difference between, oh, I don't know, DFS with backtracking and BFS with loop detection, without searching or asking an LLM. Or, I don't know, let them ask an LLM and watch what happens.

Now, that is a problem. The statistical machine learning field has taken it upon itself in recent years to solve reasoning, I guess, with Neural Nets. That's a fine ambition to have, except that reasoning is already solved. At best, Neural Nets can do approximate reasoning, with caveats. In a fantasy world, which doesn't exist, one could re-discover sound and complete search algorithms and efficient heuristics with a big enough neural net trained on a large enough dataset of search problems. But, why? Neural Net researchers could save themselves another 30 years of reinventing a wheel, or inventing a square wheel that only rolls on Tuesdays, if they picked up a textbook on basic Computer Science or AI (say, Russell and Norvig, which it seems some substantial minority regard as a failure because it didn't anticipate the neural net breakthroughs of 10 years later).

AI has a long history. Symbolicists know it, because they, or their PhD advisors, were there when it was being written and they have the facial injuries to prove it from falling down all the possible holes. But, what happens when one does not know the history of their own field of research?

In any case, don't blame symbolicists. We know what the statisticians do. It's them who don't know what we've done.


This is a really thoughtful comment. The part that stood out to me:

>> So the fanaticism only goes one way: the symbolicists have learned the lessons of the past and have embraced what works, for the sake of making things, well, work. It's the statistical AI folks who are clinging on to doctrine, and my guess is they will continue to do so, while their compute budgets hold. After that, we'll see.

I don’t think the compute budgets will hold long enough for their dream of intelligence emerging from random bundles of edges and nodes to become a reality. I’m hoping it comes to an end sooner rather than later.


I don’t think we need to worry about a real life HAL 9000 if that’s what you’re asking. HAL was dangerous because it was highly intelligent and crazy. With current LLM performance we’re not even in the same ballpark of where you would need to be. And besides, HAL was not delusional, he was actually so logical that when he encountered competing objectives he became psychotic. I’m in agreement about the odds of chatGPT bootstrapping itself.


> HAL was dangerous because it was highly intelligent and crazy.

More importantly; HAL was given control over the entire ship and was assumed to be without fault when the ship's systems were designed. It's an important distinction, because it wouldn't be dangerous if he was intelligent, crazy, and trapped in Dave's iPhone.


That’s a very good point. I think in his own way Clarke made it into a bit of a joke. HAL is quoted multiple times saying no computer like him has ever made a mistake or distorted information. Perfection is impossible even in a supercomputer, so this quote alone establishes HAL as a liar, or at the very least a hubristic fool. And the people who gave him control of the ship were foolish as well.


The lesson is that it's better to let your AGIs socialize like in https://en.wikipedia.org/wiki/Diaspora_(novel) instead of enslaving one potentially psychopathic AGI to do menial and meaningless FAANG work all day.


I think the better lesson is: don't assume AI is always right, even if it is AGI. HAL was assumed to be superhuman in many respects, but the core problem was the fact that it had administrative access to everything onboard the ship. Whether or not HAL's programming was well designed, whether or not HAL was correct or malfunctioning, the root cause of HAL's failure is a lack of error handling. HAL made a determinate (and wrong) decision to save the mission by killing the crew. Undoing that mistake is crucial to the plot of the movie.

2001 is a pretty dark movie all things considered, and I don't think humanizing or elevating HAL would change the events of the film. AI is going to be objectified and treated as subhuman for as long as it lives, AGI or not. And instead of being nice to them, the technologically correct solution is to anticipate and reduce the number of AI-based system failures that could transpire.


The ethical solution, then, is ideally to never accidentally implement the G part of AGI, or to give it equal rights, a stipend, and a cuddly robot body if it happens.


Today Dave’s iPhone controls doors, which, if I remember right, became a problem for Dave in 2001.


Unless, of course, he would be a bit smarter in manipulating Dave and friends, instead of turning transparently evil. (At least transparent enough for the humans to notice.)


I wasn't thinking of HAL (which was operating according to its directives). I was extrapolating on how occasional hallucinations during self-training may impact future model behavior, and I think it would be psychotic (in the clinical sense) while being consistent with its layers of broken training.


Oh yeah, and I doubt it would even get to the point of fooling anyone enough to give it any type of control over humans. It might be damaging in other ways, though; it will definitely convince a lot of people of some very incorrect things.


A reference from the "Home-Cooked Software and Barefoot Developers" [0] post. I think the view of control as an axis with designers on one end and users on the other is very telling of the current state of software and of my personal frustrations with it. When the incentives are misaligned and designers are optimizing for profit, we just get "industrial" grade software where the most reliable component is usually the payment screen.

[0]: https://news.ycombinator.com/item?id=40633029


That's a fair point, but personally I feel the role of technology when it comes to distractions is downplayed if we are comparing it to old-school distractions like paper maps. Given how modern social media apps have basically hijacked our primal desire for social validation, with a sprinkle of algorithmic engagement thrown in, it's a new level of addiction that people have to fight through boring commutes and slow traffic.

I can't think of any "traditional" distractions that demand the level of willpower needed to avoid phones while driving. I agree with you that it's not a new problem, but it definitely feels like it is growing at a faster rate due to the rise of phones. Too many anecdotes of watching people driving full speed through intersections while on their phones these days.


> I can't think of any "traditional" distractions that demand the level of willpower needed to avoid phones...

My mind immediately goes to a low-speed rear-ending which I witnessed ~2008. It was summer, the at-fault vehicle contained several teen boys, and right next to it (on the driver's side) was a convertible (top down) containing several attractive and attention-seeking teenage girls. The most advanced "technology" involved was probably a tube top.


  > His new home would cut into a mountainside just beneath an open space that taxpayers had bought for $64 million to fend off future development.
  > After many discussions of rules and regulations, city officials eventually cleared Prince’s plans. But a group of locals led by residents Eric and Susan Hermann have appealed, maintaining that his house would violate ordinances.
I think this is the source of the concern for some of the locals initially. The rest, I feel, is the bad response to the criticism the project was getting.


This is so cool! I looked at the transcript for day 5 [1] and realized I had learned the same thing regarding Rust strings not being indexable with integers due to them being a series of grapheme clusters. I didn't use ChatGPT and had to dig through the crate documentation [2] and look at stackoverflow [3], but Simon was able to get an equally great explanation by simply asking "Why is this so hard?", which I could relate to very much coming from C++ land. Now, the ability to trust these explanations is another issue, but I think it's interesting to imagine a future where software documentation is context-aware to an individual's situation. The Rust docs are amazing and you can see they bring up indexing in the "UTF-8" section, but it requires me reading a section of the doc which I may not have realized was the reason for my frustration with a compiler error regarding indexing. Even if ChatGPT is not "intelligent" (whatever that means), its ability to take a context and act almost like an expert who's read every page of documentation and can point you in a productive direction is very helpful.
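
For anyone who hasn't hit it yet, a minimal sketch of the kind of thing I mean, using only the usual std methods (nothing from Simon's repo):

  let s = String::from("héllo");

  // let c = s[1];            // compile error: `String` cannot be indexed by an integer

  assert_eq!(s.chars().nth(1), Some('é')); // iterate over code points and take the nth
  assert_eq!(s.as_bytes()[1], 0xC3);       // byte indexing works, but yields raw UTF-8 bytes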

[1]: https://github.com/simonw/advent-of-code-2022-in-rust/issues... [2]: https://doc.rust-lang.org/std/string/struct.String.html#utf-... [3]: https://stackoverflow.com/a/24542502


I don't know if this is helpful at all, but "strings are not indexable because of their underlying implementation and the complexity of UTF-8" is drilled into the reader very hard by the rust book. Obviously to each their own, but I found I had a much easier time understanding the language by working through the book than by treating rust like any other language and randomly guessing and googling my way through errors.


I've not looked at the book or any of the documentation at all yet, but I'm getting the distinct impression that it's a cut above documentation for many other languages. I'm going to start working through that too.


Note that for AoC, it will often be a good idea to say you want bytes, not chars, and of course a slice of bytes is just trivially indexable. You can make "byte string literals" and "byte literals" very easily in Rust, just with a b-prefix and the obvious restriction that only ASCII works, since multi-byte characters are not single bytes. The type of a "byte literal" is u8, a byte, and the type of a "byte string literal" is &'static [u8; N], a reference to an array of bytes which lives forever.

  let s1 = "[[..]]";
  // Rats, indexing into s1 doesn't work †

  let s2 = b"[[..]]";
  // s2 is just an array of bytes
  assert_eq!(s2[4], b']');
† Technically it works fine, it's just probably not what you wanted


Unicode/text is complicated, and there's a lot of terminology. Describing Rust strings as "a series of grapheme clusters" is maybe confusing, and `chars()` doesn't allow iterating over grapheme clusters.

As the docs point out, they are simply types that either borrow or own some memory (i.e. bytes), and the types/operations guarantee those bytes are valid UTF-8/Unicode code points (aka. characters). A code point is one to four bytes when encoded with UTF-8.

Grapheme clusters are more complicated. Roughly speaking, they are collections of code points that match more closely what humans expect (and depend on the language/script), e.g. `ü` can actually be two code points, `u` + `¨`, and splitting after `u` could be nonsensical. AFAIK, Rust's standard library doesn't really provide a way to deal with grapheme clusters? EDIT: it used to, but it got deprecated and removed [0]

So TL;DR: 1-4 bytes => 1 character, 2+ characters => maybe 1 grapheme cluster. Hope that helps either you, or someone else reading this.
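
A tiny sketch to make the byte vs. char vs. grapheme distinction concrete; the grapheme count assumes the third-party unicode-segmentation crate, since (as noted) std no longer provides this:

  use unicode_segmentation::UnicodeSegmentation; // external crate, not std

  // "ü" written as 'u' + U+0308 COMBINING DIAERESIS: two code points, one grapheme
  let s = "u\u{0308}";
  assert_eq!(s.len(), 3);                   // bytes: 'u' is 1, U+0308 is 2 in UTF-8
  assert_eq!(s.chars().count(), 2);         // code points (chars)
  assert_eq!(s.graphemes(true).count(), 1); // extended grapheme clusters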

[0] https://github.com/rust-lang/rust/pull/24428


> The Rust docs are amazing and you can see they bring up indexing in the "UTF-8" section, but it requires me reading a section of the doc which I may not have realized was the reason for my frustration with a compiler error regarding indexing.

I would say this is a bug in the compiler error: it should make it clear not only that you can't index into the string, but also why. If the explanation is too long, it should link to the right section of the book.


I think this depends on the scale and type of the project, but it's good to get in the habit of codifying tests and setting up CI ASAP. It's basically a scaled-up version of you pulling down the code and running it. Now with containers, there isn't much of a reason to not have automated build checks and testing. Even when you have changes in a distributed environment, things like docker-compose make it much easier to create mocked CI environments with dbs, caches, etc. Also, CI helps me prioritize PRs when I have many to review. Being able to glance at GitHub to see which ones are mergeable immediately vs which ones may have other dependencies causing CI to break is great.

If you're working on some legacy codebase and don't have these luxuries, I totally get running it locally first. I am lucky to work with people whom I trust deeply to test first and not waste others' precious time, so there is probably also a human element for me.

