GoreTex is a bad example - it's gonna delaminate after a year or so of heavy use and is pretty much impossible to repair after that. Which also undercuts ACRONYM's messaging about their GoreTex products being some kind of buy-it-for-life rain jacket.
I'm sure there'll be a PS6, but it honestly seems like non-portable consoles are on their way out. Wouldn't be surprised if there's no next-gen TV-console Xbox at all, or if it's just an MS-branded gaming PC.
I can't figure out how that makes sense. How could there not still be significant demand for a living room console that isn't underpowered by the handheld tradeoffs? As kids become adults, their lifestyle evolves away from regularly taking their console to school and to their friends' houses.
The current graphics are good enough for the mainstream, and with modern SoCs, especially with AI frame generation and upscaling, games can run in a mobile form factor with low power requirements and still have great graphics. Hell, even my iPhone can run some games that look great, assuming they're built with the hardware in mind.
Nowadays, a decent GPU (not even high-end) is $700, whereas I can get a full console with a 120Hz screen and built-in controls for $450 (Switch 2). That's the mass market.
Consoles have traditionally been underpowered PCs but are shifting to a new product category. There’s no real downside to the portable form factor besides a lack of processing power; everything else is upside.
I am similarly dubious of the idea that TV consoles will be abandoned. Handhelds necessarily require low power. Even the Steam Deck is considered too heavy by many users, and it struggles to run current-generation games at 800p.
Nintendo is in a special situation. They have a successful handheld line going back to the Game Boy. Their games tend not to be realistic, instead going for a more cartoony or anime style that can be done with low poly counts and lower-resolution textures. Nintendo didn't abandon their home console so much as merge it with their existing handheld line.
Total agreement here. The handhelds have to make extreme compromises to fit in the power/heat/performance envelope. I would guess that most Steam Deck hours are clocked within someone’s home vs on the go.
I assume the screen on the Deck/Switch is the most expensive component. Are Microsoft and Sony really going to push units where consumers have to pay for a feature they know will go unused? I guess so, as long as they offer a screen-less "portable" for $$$ off the price.
> As kids become adults, their lifestyle evolves away from regularly taking their console to school and to their friends' houses.
Yes, but adults like myself want to sit in the living room and play on a handheld while the wife or kids watch TV. This is what I've been doing for years with my Steam Deck and Switch.
I mean, current-gen consoles are already massively underpowered (in the literal sense) compared to actual PCs - the PS5 uses like 120-150W when running, which is laptop territory (in fact, it's half of my gaming laptop). Clearly they could do a lot more with a higher power budget, but are choosing not to because the gains aren't worth the increased heat and noise.
And with modern upscaling solutions I think you'd be surprised what can be done in a small package - so in that way a "portable" PS6 that can be used in or outside of the dock makes perfect sense.
> As kids become adults, their lifestyle evolves away from regularly taking their console to school and to their friends' houses.
You see, as I got older and had my own kids, I lost my appetite for playing on the TV or my gaming PC. I'd rather have something portable so I can play next to my wife on the sofa. That's why my Steam Deck sees 10x more use than my RTX 5090 desktop PC.
From someone who worked in the industry ~6 years ago: it's clearly going well for them - frankly, they're expanding and scaling way faster than I would have thought possible in 2019. They've got something like 6 cities running right now and what, 3-4 more announced?
Another thing to keep in mind is that rideshare revenue in the US is extremely geographically concentrated in urban cores. This is why every AV company targeted SF as their first city (excepting Waymo, which did some stuff in PHX). 'Hyperfocused expansion' probably looks a lot more like tackling novel areas in different metro areas than, say, expanding down into San Jose and the Central Valley.
These things, they take time.
They've clearly hit (or projections confidently show they'll hit) a point where each car is profitable. I worked in the space for a while - platform upgrades (new cars, sensors, etc.) are planned out years in advance and are pretty complex processes. But generally, each upgrade was a massive decrease in cost per car (usually 50% cheaper or more). So it's also possible they want to wait for the next platform transition.
Weird, I thought it was one of the best movies I've seen in the last few years. Wasn't at all what I expected to see, but was incredibly memorable and impactful.
F1 on the other hand was maybe the worst offender as far as literalism is concerned.
Yeah, F1 was extremely literal - characters would often describe what's going on in Brad Pitt's head while he's driving. On the other hand, it's a "big, dumb action movie" and at least it took itself seriously and didn't wink at the audience like so many modern blockbusters do.
Dude, I explicitly replied that I pinpointed it to Next. The moment I removed the Image tag and replaced it with a regular <img />, it resolved itself. I didn't even say I was using Three.js, did I?
My above text was written before you replied to my other message.
And besides, the fact that replacing <Image> with <img> fixed it doesn't imply that the actual issue was in <Image>; it might also be in another part that <Image> merely triggered. As I told you, as long as you don't pinpoint the problem, you can't say where the problem lies.
Maybe your definition of pinpointing is different than mine.
Don't ever use Next. Terrible developer experience, vendor lock-in, and weird undocumented conventions that make building anything other than some kind of B2B SaaS CRUD site a minefield of foot guns. My favorite thing I've encountered is the Next <Image /> tag somehow dropping a WebGL scene on the same page to 2 FPS.
How was Vercel able to frog-boil normal React users with vendor lock-in? React was supposed to be Meta's baby and open source was supposed to defeat vendor lock-in.
If you are new to React and just figuring out how to get it running, you will likely end up on this page. The first recommendation is Next.js.
The real best way for a beginner to start is IMO Vite. It comes with everything you need to get started and lets you choose what to do next. Curiously, the link to Vite only appears at the very bottom of the page, and is implied to be only for those not already served by the other options. Wink wink, nudge nudge.
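For reference, scaffolding a new React app with Vite is a single command (the app name here is just an example):

```sh
npm create vite@latest my-app -- --template react-ts
```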
because they pushed shiny new features that are reaaaallly good for a certain set of commercial users (think webshops, where time to first contentful paint equals time to money)
and the tech is not bad, it's just meh (immature and a bit misguided) after all
by flipping the whole thing upside down and defaulting to server-side, a lot of previously hard problems became easy: the usual glueing of different APIs - user, CMS, metadata, "security", adtech, blablabla - translates to `const user = await auth();` and so on, and emitting a React page after processing the request is still kind of nice. Server Actions are also nice because Next manages the API URLs for you. and since mobile technology (phones and networks) evolved a lot, it's not a problem to do a request for each page (especially on webshops, where Next prefetches the ha$$y path)
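to make that concrete, a rough sketch of the server-component pattern (the `auth`/`getPosts` helpers here are hypothetical stand-ins for your session/CMS clients, not real Next APIs; only the App Router component shape is Next's):

```tsx
// app/page.tsx - an App Router server component; everything below runs
// on the server, so the "glueing of different APIs" is just awaits.

// hypothetical helpers standing in for your auth/CMS clients:
async function auth() { return { name: "Ada" }; }
async function getPosts() { return [{ id: 1, title: "Hello" }]; }

export default async function Page() {
  const user = await auth();      // session lookup, server-side
  const posts = await getPosts(); // CMS fetch, server-side
  return (
    <main>
      <p>signed in as {user.name}</p>
      <ul>{posts.map((p) => <li key={p.id}>{p.title}</li>)}</ul>
    </main>
  );
}
```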
Can you name/explain your problems with them? What was the intended use case, what were your goals, what did you like and dislike, and what was definitely missing for you?
Svelte is very immature in terms of tooling and IDE support, and that hasn't seemed to improve over the last 7 years of using it. Libraries are sub-par vs React, and the design of the language does not favor making things explicit. $:{} for example is not explicit, I'd much rather use a slightly longer token (like useEffect) to help document what's going on. That's just one example; it's riddled with things like that (e.g. keyed each blocks, the store $ syntax). Also, calling all files +page.svelte leads to such a messy experience in the IDE. There are other problems too that I'm not thinking of. I think a lot of the draw of Svelte comes from it "feeling" very fast (both to build and to iterate on) when you first start using it, but IMO using it for anything non-trivial is a nightmare.
Built-in scoped CSS is a great idea, and the compiled bundle is nice. I used it for multiple projects (order system, admin panels, etc.), but things always feel that little bit more brittle when I use Svelte vs something more mature. I won't use it again for any project.
I have less experience with Next, and it was longer ago, but IIRC I found it needlessly hard to use and deploy, and I had some problems with Image tags (you need the exact size of the images?). I also found their gamified tutorial really off-putting and indicative of corporate bloat, which I guess makes sense now that they're locking users in. The reason I used it was page loading speed optimization (SSR), and I didn't find any better alternatives using React at the time. I use Next for my landing page https://audiodiary.ai and for its admin panel.
> $:{} for example is not explicit, I'd much rather use a slightly longer token (like useEffect) to help document what's going on
FWIW you're describing a legacy syntax — modern Svelte has explicit effects, and addresses your other concerns. I think you'll find it's much more to your liking https://svelte.dev/docs/svelte/$effect
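For comparison, a minimal sketch of what that looks like with runes (Svelte 5):

```svelte
<script>
  // Svelte 5 runes: state and side effects are explicitly named,
  // unlike the legacy `$: { ... }` reactive statement.
  let count = $state(0);

  $effect(() => {
    // reruns whenever `count` changes; dependencies are tracked automatically
    console.log(`count is now ${count}`);
  });
</script>

<button onclick={() => count++}>clicked {count} times</button>
```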
100%. Used it last year for the first time in a long time and was surprised by how awful the experience was. Docs were vague and hard to navigate. My web application seemed slow by default. We also had a hell of a time trying to deploy it with Docker to AWS using the sample Dockerfiles provided by Vercel (not sure if this is still the case).
Why are you going all in on world models instead of basing everything on top of a 3D engine that could be manipulated / rendered with separate models? If a world model was truly managing to model a manifold of a 3D scene, it should be pretty easy to extract a mesh or SDF from it and drop that into an engine where you could then impose more concrete rules or sanity check the output of the model. Then you could actually model player movement inside of the 3D engine instead of trying to train the world model to accept any kind of player input you might want to do now or in the future.
Additionally, I'm curious what exactly the difference is between the new mode of storytelling you're describing and something like a CRPG or visual novel - is your hope that you can just bake absolutely everything into the world model instead of having to implement systems for dialogue/camera controls/rendering/everything else that's difficult about working with a 3D engine?
> Why are you going all in on world models instead of basing everything on top of a 3D engine that could be manipulated / rendered with separate models?
I absolutely think there's going to be super cool startups that accelerate film and game dev as it is today, inside existing 3D engines. Those workflows could be made much faster with generative models.
That said, our belief is that model-imagined experiences are going to become a totally new form of storytelling, and that these experiences wouldn't be free to be as weird and wacky as they could be under the heuristics and limitations of existing 3D engines. This is our focus, and why the model is video-in and video-out.
Plus, you've got the very large challenge of learning a rich, high-quality 3D representation from a very small pool of 3D data. The volume of 3D data is just so small, compared to the volumes generative models really need to begin to shine.
> Additionally, I'm curious what exactly the difference is between the new mode of storytelling you're describing and something like a CRPG or visual novel
To be clear, we don't yet know what shape these new experiences will take. I'm hoping we can avoid an awkward initial phase where these experiences resemble traditional game mechanics too much (although we have much to learn from them), and just fast-forward to enabling totally new experiences that just aren't feasible with existing technologies and budgets. Let's see!
> is your hope that you can just bake absolutely everything into the world model instead of having to implement systems for dialogue/camera controls/rendering/everything else that’s difficult about working with a 3D engine?
Yes, exactly. The model just learns better this way (instead of breaking it down into discrete components) and I think the end experience will be weirder and more wonderful for it.
> Plus, you've got the very large challenge of learning a rich, high-quality 3D representation from a very small pool of 3D data. The volume of 3D data is just so small, compared to the volumes generative models really need to begin to shine.
Isn't the entire aim of world models (at least, in this particular case) to learn a very high-quality 3D representation from 2D video data? My point is that if you manage to train a navigable world model for a particular location, that model has managed to fit a very high-quality 3D representation of that location. There's lots of research on NeRFs demonstrating how you can extract these 3D scenes as meshes once a model has fit them. (NeRFs are another great example of learning a high-quality 3D representation from sparse 2D data.)
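For concreteness, the usual recipe is to sample the learned density field on a grid and run marching cubes over it. A sketch, assuming you already have a trained `density_fn` (e.g. from a NeRF) and using scikit-image; the resolution, bounds, and threshold are illustrative:

```python
import numpy as np
from skimage import measure  # marching cubes implementation

def extract_mesh(density_fn, resolution=256, bound=1.0, threshold=25.0):
    """Turn a trained NeRF-style density field into a triangle mesh
    that a conventional 3D engine can consume."""
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    # query the learned field at every grid point
    sigma = density_fn(grid.reshape(-1, 3)).reshape((resolution,) * 3)
    verts, faces, normals, _ = measure.marching_cubes(sigma, level=threshold)
    # map voxel indices back into world coordinates
    verts = verts / (resolution - 1) * (2 * bound) - bound
    return verts, faces, normals
```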
> That said, our belief is that model-imagined experiences are going to become a totally new form of storytelling, and that these experiences wouldn't be free to be as weird and wacky as they could be under the heuristics and limitations of existing 3D engines. This is our focus, and why the model is video-in and video-out.
There's a lot of focus in the material on your site on the models learning physics by training on real-world video - wouldn't that imply that you're trying to converge on a physically accurate world model? I imagine that would make weirdness and wackiness rather difficult.
> To be clear, we don't yet know what shape these new experiences will take. I'm hoping we can avoid an awkward initial phase where these experiences resemble traditional game mechanics too much (although we have much to learn from them), and just fast-forward to enabling totally new experiences that just aren't feasible with existing technologies and budgets. Let's see!
I see! Do you have any ideas about the kinds of experiences that you would want to see or experience personally? For me it’s hard to imagine anything that substantially deviates from navigating and interacting with a 3D engine, especially given it seems like you want your world models to converge to be physically realistic. Maybe you could prompt it to warp to another scene?
> wouldn’t that imply that you’re trying to converge on a physically accurate world model?
I'm not the CEO or associated with them at all, but yes, this is what most of these "world model" researchers are aiming for. As a researcher myself, I do not think this is the way to develop a world model, and I'm fairly certain that this cannot be done through observations alone. I explain more in my response to the CEO[0]. This is a common issue in many areas of ML experimentation: you simply cannot rely on benchmarks to get you to AGI. Scaling of parameters and data only goes so far. If you're seeing slowing advancement, it is likely due to over-reliance on benchmarks and under-reliance on what benchmarks intend to measure. But this is a much longer conversation (I think I made a long comment about it recently; I can dig it up).
It's essentially this paper, but applied to video recordings of many different real-world locations instead of Counter-Strike maps. Each channel just changes the location.