> In the right orbit, a solar panel can be up to 8 times more productive than on earth, and produce power nearly continuously, reducing the need for batteries.
Sure. Now do cooling. That this isn't in the "key challenges" section makes this pretty non-serious.
A surprising amount of the ISS is dedicated to this, and they aren't running a GPU farm. https://en.wikipedia.org/wiki/External_Active_Thermal_Contro...
Barely mentioning thermal management seems at odds with the X principle of "Don’t use up all your resources on the easy stuff": https://blog.x.company/tackle-the-monkey-first-90fd6223e04d
Cooling area seems similar to generation area, so maybe it's less of a key challenge?
GPT says 1000 W at 50 C takes about 3 m^2 to radiate (edge-on to Earth and Sun), and generating that 1000 W takes about... 3 m^2 of solar panel. The panel needs its backside clear to radiate and keep itself coolish (~100 C), so the compute radiator does need to be a separate surface. Spreading a 1000 W point source across a 3 m^2 tile (or half that if two-sided?) is perhaps not scary, even with weight constraints?
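To sanity-check those numbers, here's a minimal Stefan-Boltzmann estimate. The assumptions are mine, not the paper's: emissivity 0.9, radiator at 50 C radiating to a deep-space sink (ignoring Earth IR, albedo, and solar loading), and 25%-efficient panels at 1 AU. Real radiators need extra area or higher temperature once those loads and design margins are included, which is roughly where the ~3 m^2 figure lands.

```python
# Ideal radiator area vs. solar array area for 1 kW (assumptions noted above).
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9          # assumed radiator emissivity
T_RAD = 273.15 + 50       # radiator temperature, K
SOLAR_FLUX = 1361.0       # W/m^2 at 1 AU
PANEL_EFF = 0.25          # assumed end-to-end array efficiency

P = 1000.0                # heat to reject / power to generate, W

flux_out = EMISSIVITY * SIGMA * T_RAD**4          # ~557 W/m^2 per radiating face
area_radiator_1side = P / flux_out                # ~1.8 m^2 (ideal lower bound)
area_radiator_2side = P / (2 * flux_out)          # ~0.9 m^2 if both faces see cold sky
area_panel = P / (SOLAR_FLUX * PANEL_EFF)         # ~2.9 m^2 of solar array

print(f"radiator (1-sided): {area_radiator_1side:.1f} m^2")
print(f"radiator (2-sided): {area_radiator_2side:.1f} m^2")
print(f"solar panel:        {area_panel:.1f} m^2")
```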
Hmm, from an order-of-magnitude perspective, it looks like an (L-shaped) Starlink v2 sat has ~100 m^2 of panel, a draw on the order of 10 kW, and a body area on the order of 100 m^2. And there are ~10,000 of them. So you'd want something bigger. A 100 x 100 m sheet might get you down to 10 sats per 100,000-GPU data center.
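And a back-of-envelope check of that last guess, with every number being an assumption of mine (a ~350 W per-accelerator budget, the same 25% array efficiency as above), not a figure from the paper or any Starlink spec:

```python
# Does "10 sats per 100,000-GPU data center" pencil out for a 100 x 100 m sheet?
SOLAR_FLUX = 1361.0       # W/m^2 at 1 AU
PANEL_EFF = 0.25          # assumed end-to-end array efficiency
GPU_POWER = 350.0         # assumed W per accelerator, including overheads

sheet_area = 100 * 100                                  # m^2 of panel per satellite
power_per_sat = sheet_area * SOLAR_FLUX * PANEL_EFF     # ~3.4 MW
fleet_power = 10 * power_per_sat                        # ~34 MW across 10 satellites
gpus_supported = fleet_power / GPU_POWER                # ~97,000 accelerators

print(f"{power_per_sat/1e6:.1f} MW per sat, ~{gpus_supported:,.0f} GPUs across 10 sats")
```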
Regarding the ISS: the ISS has its whole big self basking in the sunlight and needing to be cooled, versus a design where the only sun-lit thing is the panel.
This is absolutely the first thing I looked for too. They barely mentioned thermal management at all. Maybe they know something I don't, but I know from past posts here that many people share this concern. Very strange that they didn't go there - or maybe they didn't go there because they have no solution and this is just greenwashing for the costs of AI.
No, they just assumed their design fits within the operational envelope of a conventional satellite - the paper (which no one read, apparently) literally says their system design "assumes a relatively conventional, discrete compute payload, satellite bus, thermal radiator, and solar panel designs".
This is not the 1960s. Today, if you have an idea for doing something in space, you can start by scoping out the details of your mission plan and payload requirements, and then see if you can solve it with parts off a catalogue.
(Of course there are a million issues that will crop up when actually designing and building the spacecraft, but that's too low-level for this kind of paper, which just notes that (the authors believe) the platform requirements fall close enough to existing systems to not be worth belaboring.)
Just run your AI calculations on your favorite Cryoarithmetic Engine, no problem.
The article doesn’t even have the word “heat” in it.
The linked paper does.
How much are you ready to bet against Elon's plans to scale up Starlink v3 for GPUs? Starlink v3 already has a 60 m long solar array, so they're already solving dissipation for that size. Assume linear scaling to many thousands of modules.
From https://x.com/elonmusk/status/1984249048107508061:
"Simply scaling up Starlink V3 satellites, which have high speed laser links would work. SpaceX will be doing this."
From https://x.com/elonmusk/status/1984868748378157312:
"Starship could deliver 100GW/year to high Earth orbit within 4 to 5 years if we can solve the other parts of the equation. 100TW/year is possible from a lunar base producing solar-powered AI satellites locally and accelerating them to escape velocity with a mass driver."
> How much are you ready to bet against Elon's plans to scale up Starlink v3 for GPUs?
I'm sure they'll be ready right after the androids and the robotaxi and the autonomous LA-NYC summoning.
> Starlink v3 already has a 60M length solar array, so they're already solving dissipation for that size.
Starlink v3 doesn't exist yet. They're renders at this point. Full-sized v2s haven't even flown yet, just mass simulators.
https://en.wikipedia.org/wiki/Starlink#Satellite_revisions
I love your enthusiasm
Please post where you are creating the bet. You should make a lot of money from it
Betting on "Elon misses a timeline" or "Elon waters down previous plans" is tough, because no one wants to take the other side. It's guaranteed.
SpaceX turns the impossible into the late. Been a shareholder since 2008.
You didn’t say it would be late, you said it’s impossible. Set up the bet, sir.
that's easy - just put everything right behind the solar panels /s
Point solar panels away from the Sun and they work as rudimentary radiators :).
More seriously though, the paper itself touches on cooling and radiators. Not much, but that's reasonable - cooling isn't rocket science :), it's a solved problem. Talking about it here makes as much sense as talking about basic attitude control. Cooling the satellite and pointing it in the right direction are solved problems. They're important to detail in a full system design, but not interesting enough for a paper that's about "data centers, but in space!".
Cooling at this scale in space is very much not a solved problem. Some individual datacenter racks use more power than the entire ISS cooling system can handle.
It's solved on Earth because we have relatively easy (and relatively scalable) ways of getting rid of it - ventilation and water.
No, I meant in space. This is a solved engineering problem for this kind of mission. Whether they can make it work within the power and budget constraints is the actual challenge, but that's economics. No new tech is needed.
> No new tech is needed.
Sure, in the same sense that I could build a bridge from Australia to Los Angeles with "no new tech". All I have to do is find enough dirt!
No, but building bridges is a good example - it's also a solved problem. Show civil engineers a river, tell them how much and what type of traffic it needs to carry, and they'll tell you it obviously can be done; they'll even tell you what structural elements will be needed and roughly how expensive they are. The problem to solve here isn't whether this can be done, but which off-the-shelf parts to use to make a design that you can afford.
We're past the point of every satellite being a custom R&D job resulting in an entirely bespoke design. We're even moving past the point where you need to haggle about every gram; launch costs have dropped a lot, giving more options to trade mass against other parameters, like more effective heat rejection :).
But I think the first and most important point for this entire discussion thread is: there is a paper - an actual PDF - linked in the article, in a sidebar to the right, which seemingly nobody read. It would be useful to do that.
> Show civil engineers a river, tell them how much and what type of traffic it needs to carry, and they'll tell you it obviously can be done; they'll even tell you what structural elements will be needed and roughly how expensive they are.
Now ask them to do the Australia / Los Angeles one.
"lol no"
The where and the scale matter.
Where: Low Earth Orbit.
Scale: Lots of small satellites.
I.e. done to death and boring. The number of spacecraft does not affect the heat management of an individual spacecraft.
Much like the number of bridges you build around the world does not directly affect the amount of traffic on any individual one.
> Where: Low Earth Orbit.
Challenging!
> Scale: Lots of small satellites.
So we're getting cheaper by ditching economies of scale?
There's a reason datacenters are ever-larger giant warehouses.
> Much like number of bridges you build around the world does not directly affect the amount of traffic on any individual one.
But there are places you don't build bridges. Because it's impractical.
I humbly request 'dang to strike "read the damn article" off the list of guideline violations.
It's solved for low-power cooling.
We do not have a solution for getting rid of megawatts or gigawatts of heat in space.
What the sibling comment is pointing out is that you cannot simply scale up any and every technology to any problem scale. If you want to get rid of megawatts of heat with our current technology, you need to ship up several tons of radiators and then build massive, kilometer-scale radiator panels. The only way to dump heat in space is to let a hot object radiate infrared light into the void. This is an incredibly slow and inefficient process, whose rate is directly controlled by the surface area of your radiator.
The amount of radiator area you need for a scheme like this is entirely out of the question.
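For concreteness, here's what ideal two-sided radiators imply at those scales, using the same assumptions as the sketch further up (emissivity 0.9, 50 C, deep-space sink); real hardware needs substantially more area plus pumps, loops, and margin:

```python
# Ideal radiator area for MW- and GW-scale heat rejection (assumptions noted above).
SIGMA, EMISSIVITY, T_RAD = 5.670e-8, 0.9, 273.15 + 50
flux_per_m2 = 2 * EMISSIVITY * SIGMA * T_RAD**4    # ~1.1 kW per m^2 of two-sided panel

for heat_w in (1e6, 1e9):                          # 1 MW and 1 GW of waste heat
    area = heat_w / flux_per_m2                    # required panel area, m^2
    side = area ** 0.5                             # side of an equivalent square panel
    print(f"{heat_w/1e6:>6.0f} MW -> ~{area:,.0f} m^2 (~{side:,.0f} m square)")
```

Roughly: a megawatt is a ~30 m square of ideal panel, while a gigawatt really is kilometer-scale.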
They literally have a solution, it's a trivial one and described in the paper. I'll try to paraphrase the whole thing, because apparently no one read it.
1. Take existing satellite designs like Starlink, which obviously manage to utilize a certain amount of power successfully, meaning they've solved both collection and heat rejection.
2. Pick one, swap out its payload for however many TPUs it can power instead. Since TPUs aren't an energy source, the solar/thermal calculation does not change. Let X be the compute this gives you. (See the short sketch below the list.)
3. Observe that the thermal design of a satellite is independent of whether you launch 1 or 10,000 of them. Per point 2, thermals for one satellite are already solved, therefore this problem is boring and not worth further mention. Instead, go find an X that's big enough to be a useful unit of scaling for compute.
4. Play with some wacky ideas about formations to improve parameters like bandwidth, while considering payload-specific issues like radiation hardening, NONE OF WHICH HAVE ANY IMPACT ON THERMALS[0]. This is the interesting part. Publish it as a paper.
5. Have someone make a press release about the paper. A common mistake.
6. Watch everyone get hung up on the press release and not bother clicking through to the actual paper.
--
[0] - Well, some do. Note that fact in the paper.
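A minimal sketch of the point-2 bookkeeping. Every number here is a placeholder I picked for illustration, not a figure from the paper or any real Starlink spec:

```python
# "Power-equivalent payload swap": the bus's power and heat budgets don't change.
bus_payload_power_w = 5_000     # assumed payload power budget of an existing, flight-proven bus
tpu_power_w = 250               # assumed draw per accelerator, including conversion losses

tpus_per_sat = bus_payload_power_w // tpu_power_w   # this is the "X" of point 2
heat_to_reject_w = bus_payload_power_w              # unchanged: the bus already rejects this much

print(f"X = {tpus_per_sat} accelerators per satellite; waste heat stays at {heat_to_reject_w} W")
```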
Cooling is conspicuously absent other than a brief mention in the conclusion. As if it had been redacted, because it’s such an obvious and hard problem in space. Which leads me to believe they’ve made progress and aren’t sharing it for competitive reasons. There’s an extremely strong incentive for SpaceX to put GPUs on board their birds for local SDR processing power, for applications like SIGINT, high channel counts, etc., and the cooling is literally the only impediment.
In fact everything in this paper is already solved by SpaceX except GPU cooling.
> Cooling is conspicuously absent other than a brief mention in the conclusion.
It's not absent - it's covered in the paper, which this blog release summarizes. There's a link to the paper itself in the side bar.
> In fact everything in this paper is already solved by SpaceX except GPU cooling.
Cooling is already solved by SpaceX too, since this paper basically starts with the idea of swapping out whatever payload is on Starlink for its power-equivalent in TPUs, and then goes from there.
I'm completely puzzled as to why space-based compute is so exciting to everyone all of a sudden. I have worked on spacecraft, and the constant-power benefit seems comically far from outweighing the many, many negatives, even if launch cost were zero, which we are still very far from.
Am I missing something? Feels like an extremely strong indicator that we're in some level of AI bubble because it just doesn't make any sense at all.
Since LLM results aren't trustworthy anyways, what's a few bit flips amongst friends?
More stupidity coming out of the corporate self-promoters which inhabit Google. Not surprised to see Blaise Agüera y Arcas on the paper; he did a TED talk in 2016 trumpeting a bunch of Google image generation research which, as far as I can tell, was not related to him at all. No related published research. Put the real researchers in the TED talks, please.
It will require a number of innovations just to solve the formation flying aspect of the system, not to mention the other challenges (listed and not)... good luck with that.
Right, but they're flying them close on purpose - the point is, at first glance it looks feasible, and the close-formation aspect has enough benefits that it's worth exploring further. For me, it's the first time I've seen the idea of exploiting constellations for benefit within the system (here, communication between satellites) rather than externally (synthetic-aperture telescopes/beaming, or just more = lower orbit = cheaper).
What sort of formation are you thinking of? They’re all going to be hugging the terminator, like a big merry go round.
This is dual-use technology for the weapon systems needed for Golden Dome. Engineers should be wary when they're getting asked to work on things that don't make economic sense.
The ultimate "out of sight out of mind" solution to a problem?
I'm surprised that Google has drunk the "Datacenters IN SPACE!!!1!!" Kool-Aid. Honestly I expected more.
It's so easy to poke a hole in these systems that it's comical. Answer just one question: how/why is this better than an enormous solar-powered datacenter somewhere like the middle of the Mojave Desert?
Think of any near-future spacecraft, or any idea for spaceships cruising between Earth and the Moon or Mars, that aren't single-use. What are (or will be) such spacecraft? Basically data centers with some rockets glued to the floor.
It's probably not why they're interested in it, but I'd like to imagine someone with a vision for the next couple decades realized that their company already has data centers and powering them as their core competency, and all they're missing is some space experience...
Sure, if you don't mind boiling the passengers.
Heat management is table stakes. It's important, but boring. Nothing to obsess about.
> It's important, but boring.
It gets very exciting if you don't have enough.
> Nothing to obsess about.
It's one of the primary reasons these "AI datacenters… in space!" projects are goofy.
From the post they claim 8 times more solar energy and no need for batteries because they are continuously in the sun. Presumably at some scale and some cost/kg to orbit this starts to pencil out?
You're trading an 8x-smaller, low-maintenance, solid-state solar field for a massive, probably high-maintenance, liquid-based radiator field.
Can't be high maintenance if we just make it uncrewed, unserviceable and send any data center with catastrophically failed cooling to Point Nemo /s
If it can be mostly solid-state, then it's low-maintenance. Also, design it to burn up before MTTF, like all the cool space kids do these days. Not gonna be worse than Starlink unless this gets massively scaled up, which it's meant to be (ecological footprint left as an exercise to the reader).
No infrastructure, no need for security, no premises, no water.
I think it's a good idea, actually.
> No infrastructure
A giant space station?
> no need for security
There will be if launch costs get low enough to make any of this feasible.
> no premises
Again… the space station?
> no water
That makes things harder, not easier.
This is not a giant space station ...
>There will be if launch costs get low enough to make any of this feasible.
I don't know what you mean by that.
> This is not a giant space station …
Fundamentally, it is, just in the form of a swarm. With added challenges!
> I don't know what you mean by that.
If you can get to space cheaply enough for an orbital AI datacenter to make financial sense, so can your security threats.
> Fundamentally, it is, just in the form of a swarm. With added challenges!
Right, in the same sense that existing Starlink constellation is a Death Star.
This paper does not describe a giant space station. It describes a couple dozen satellites in a formation, using gravity and optics to get extra bandwidth for inter-satellite links. The example they give uses 81 satellites, which is a number made trivial by Starlink (it's also in the blog release itself, so no "not clicking through to the paper" excuses here!).
(The gist: the paper seems to describe a small constellation as a useful compute unit that can be scaled indefinitely - basically replicating the scaling design used in terrestrial ML data centers.)
> Right, in the same sense that existing Starlink constellation is a Death Star.
"The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity."
This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
> The example they gave uses 81 satellites…
Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
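On the cluster-dynamics quote above: the ~100-200 m "breathing" of neighbor spacing is ordinary relative-orbit motion. Here's a minimal sketch using the Clohessy-Wiltshire (Hill) equations, the standard linearized model for close formations in near-circular orbit. Whether the paper uses exactly this model I don't know, and the ~650 km altitude and the initial offsets below are my guesses, not the paper's configuration.

```python
import math

MU = 3.986004418e14                  # Earth's gravitational parameter, m^3/s^2
a = 6371e3 + 650e3                   # assumed ~650 km circular reference orbit
n = math.sqrt(MU / a**3)             # mean motion, rad/s

# Hypothetical neighbor: 100 m radial and 50 m cross-track offset, with x_dot0 = z_dot0 = 0
# and the along-track velocity y_dot0 = -2*n*x0 that makes the relative orbit closed (no drift).
x0, y0, z0 = 100.0, 0.0, 50.0        # m

def separation(t: float) -> float:
    """Distance to the reference satellite at time t (closed-orbit CW analytic solution)."""
    x = x0 * math.cos(n * t)                 # radial
    y = y0 - 2.0 * x0 * math.sin(n * t)      # along-track (2:1 ellipse)
    z = z0 * math.cos(n * t)                 # cross-track
    return math.hypot(x, y, z)

period = 2 * math.pi / n
samples = [separation(k * period / 8) for k in range(9)]
print(f"orbital period ~{period/60:.0f} min; separation over one orbit: "
      + ", ".join(f"{s:.0f} m" for s in samples))
```

With these made-up offsets, the neighbor spacing oscillates between roughly 110 m and 200 m once per orbit, which is the same qualitative behavior the paper describes.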
> This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
Irrelevant for spacecraft dynamics or for heat management. The problems of keeping satellites from colliding and of shedding the watts the craft gets from the Sun are independent of the compute done by the payload. It's, like, the basic tenet of digital computing.
> Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
A data center is made of multiples of some compute unit. This paper is describing a single compute unit that makes sense for machine-learning work.
> The problem of keeping satellites from colliding or shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload.
The more compute you do, the more heat you generate.
> A data center is made of multiples of some compute unit.
And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
I think the atmosphere absorbs something like 25% of incoming solar energy. If that's correct, you get a free 33% increase in compute by putting that compute behind a solar panel in LEO.
And you can pretty much choose how long you want your day to be (within limits). The ISS has a sunrise every 90 minutes. A ~45-minute night is obviously much easier to bridge with batteries than the ~12 hours of night on the surface. And if you spend a bunch more fuel on getting into a better orbit, you even get perpetual sunlight, again more than doubling your energy output (and thermal challenges).
I have my doubts that it's worth it with current or near future launch costs. But at least it's more realistic than putting solar arrays in orbit and beaming the power down
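The battery-sizing difference in the comment above is easy to make concrete. The load and usable depth of discharge here are assumptions of mine, just to show the ratio:

```python
# Battery energy needed to ride through a ~45 min LEO eclipse vs ~12 h of ground night.
load_kw = 100.0            # assumed continuous compute load, kW
usable_fraction = 0.8      # assumed usable depth of discharge

leo_eclipse_h = 0.75       # ~45 min per ~90 min orbit (worst case; dawn-dusk orbits see ~none)
ground_night_h = 12.0

for name, hours in (("LEO eclipse", leo_eclipse_h), ("ground night", ground_night_h)):
    pack_kwh = load_kw * hours / usable_fraction
    print(f"{name}: ~{pack_kwh:,.0f} kWh of battery for a {load_kw:.0f} kW load")
```

Same load, roughly a 16x difference in battery capacity, before even considering the perpetual-sunlight orbits.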
Data centers in space are guaranteed to be a thing by 2035.
https://x.com/elonmusk/status/1984868748378157312
https://x.com/elonmusk/status/1985743650064908694
https://x.com/elonmusk/status/1984249048107508061
0.5% of the Starlink network deorbits each month currently, though potentially more.
They're already having a negative, contaminating effect on our upper atmosphere.
Sending up bigger ones, and more of them (today there are some 8,800, but they target 30k), sounds ill-advised.
1: https://www.fastcompany.com/91419515/starlink-satellites-are...
2: https://www.science.org/content/article/burned-satellites-ar...
However, 10 years in Musk time is at least 30 years in real time.
Had me going for a minute there.
Poe's Law strikes again!