Strategic Initiatives

Mamdani’s East Harlem Grocery Store Boondoggle // While staying true to his socialist roots, New York City’s mayor has chosen one of the worst possible options to achieve his affordability goals.

  • Municipal Expansion: Mayor Zohran Mamdani announced the first of five planned city-owned grocery stores, to be located at La Marqueta in East Harlem.
  • Fiscal Commitment: The project is slated to receive $30 million from the city’s capital budget, with the facility operating rent-free and tax-free while requiring union labor.
  • Statistical Dispute: The administration’s claim of a 66% rise in grocery prices is based on a misinterpretation of consumer spending data; the actual Bureau of Labor Statistics index shows a 34% increase in the New York City metro area over the same decade.
  • Market Consequences: The city-funded store faces criticism for using taxpayer resources to directly compete with local, tax-paying entrepreneurs and small businesses.
  • Alternative Solutions: Proposals for better utilizing the $30 million include upgrading existing store infrastructure, improving public transit access to current low-cost retailers, and reforming land-use policies.
  • Development Potential: Modifying the current configuration of NYCHA developments to include mixed-income housing and ground-floor retail could attract private supermarkets and generate sustainable revenue for the city.

New York City Mayor Zohran Mamdani has announced the first location for one of the five promised city-owned grocery stores. The 9,000-square-foot store will be built on a city-owned vacant site under the Metro-North railroad viaduct at La Marqueta, near Park Avenue and East 116th Street, in Manhattan’s East Harlem neighborhood.

The city will finance the store’s construction with $30 million from the capital budget. The store will have no debt service. Though privately operated, it will pay no rent or property tax.

Mamdani’s plan proposes, in essence, that the city will compete with local grocery stores—using public subsidies to lower the cost of staple foods—and that it will do so while paying store employees union wages.

It is by no means clear that both objectives are possible without additional subsidies. Nor is this extravagant expenditure at a time of budget stringency the most effective way to achieve Mamdani’s food-affordability goals for East Harlem residents.

The mayor’s rationale for his public grocery-store venture, as stated in a recent press release, is that “[g]rocery prices in New York City have risen nearly 66% over the past decade—significantly outpacing the national average.” That’s a bogus statistic, and we can trace how the mayor’s staff made the error. The press release links to a report from New York State Comptroller Thomas DiNapoli, who indeed finds that something increased by 66 percent: New York metropolitan-area consumers’ spending on food eaten at home, from 2012–13 to 2022–23. That statistic, which includes spending by affluent people in the suburbs who shop at premium stores, says nothing about prices.

The Bureau of Labor Statistics publishes a separate price index for the cost of food consumed at home for the New York metropolitan area. Not surprisingly, the local index tracks the national index closely (see chart below). Over the ten-year period from March 2016 to March 2026, the food-at-home price index for the New York City metro area increased by 34 percent, versus 32.5 percent for the national index.

Source: U.S. Bureau of Labor Statistics; Federal Reserve Bank of St. Louis

New Yorkers live in an expensive region. Moreover, East Harlem has long struggled to bring affordable food to its residents. Historically, the neighborhood depended on small supermarkets run by individual entrepreneurs and was shunned by big regional and national retailers. The small stores often fell short of the standards set by large grocers, with high prices and limited access to fresh fruits and vegetables.

In the 1990s, a huge political battle erupted over a proposal to construct a Pathmark supermarket on a city-owned site at East 125th Street and Third Avenue. Leading the opposition were entrepreneurs who operated small supermarkets in the neighborhood. They feared the big new store would drive them out of business.

The Pathmark opened in 1999 and became hugely popular. But it lasted only 16 years, closing in 2015 with the bankruptcy of its parent company, A&P. Today, the site is a vacant lot, and the grocers left standing are mostly the same small operators. They have benefited from the population growth of the East Harlem community district, Manhattan 11, from 117,743 in 2000 to 133,493 in 2020.

The small grocers provide walk-in convenience for East Harlem’s low-income population, but they cannot achieve the purchasing and operating economies of scale enjoyed by Pathmark in its heyday. Fortunately, East Harlem residents also have access to an Aldi supermarket in the East River Plaza shopping mall at the east end of 117th Street. Part of a larger national chain, the East Harlem Aldi offers a large selection of private-label items and advertises its low prices.

Aldi may operate successfully in East Harlem because the shopping mall’s parking garage attracts a more affluent clientele from a broader area of the city. Nonetheless, its operating model (and that of its rival, Lidl) likely provides a better answer to Mamdani’s concerns about food affordability, and at no extra cost to the city.

The city council will need to approve the proposed La Marqueta grocery lease. Council Speaker Julie Menin was noncommittal and expressed concern about the impact on local businesses. Local grocers figure to be strongly opposed, as they should be. The city should not be using taxpayer resources to undercut businesses that pay taxes and comply with other applicable laws, all to benefit one favored operator.

What should Mamdani do instead? The city could take the same $30 million and use it to help local entrepreneurs upgrade their stores—for example, with energy-efficient equipment. Such aid could be conditioned on competitive pricing of staples, though the city may lack the capacity to monitor these agreements effectively. The city could also upgrade bus service along East 116th Street, making it easier for residents without cars to reach stores like Aldi.

More broadly, the city should reconsider the land-use patterns of East Harlem, which limit access to services widely available elsewhere in Manhattan. A band of aging New York City Housing Authority (NYCHA) buildings stretches across Harlem and East Harlem between East 112th and East 115th Streets, with additional large NYCHA projects extending for blocks to the north and south in Community District 11. The result is a vast, government-engineered concentration of poverty.

In keeping with the benighted planning theories of the time, those NYCHA projects have no retail stores within them. They do have large open spaces suitable for new construction and the potential for selective demolition and tenant relocation within the developments. By allowing new, higher-income housing within NYCHA developments while retaining existing tenants, the city could attract new supermarkets that would not only upgrade the local retail environment for all residents but also generate rental revenue from ground-floor spaces, helping to underwrite the costs of new housing.

True to his socialist principles, Mamdani has chosen one of the worst possible options to achieve his goals. He should be more pragmatic, and the city council should help him get there.

Eric Kober is a senior fellow at the Manhattan Institute.


The electromechanical angle computer inside the B-52 bomber's star tracker


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Celestial Navigation Necessity: Manual techniques were jam-proof and infrastructure-independent but prone to human error and slow to perform.
  • Automated Complexity: The 1960s B-52 system relied on an intricate electromechanical apparatus, termed the Angle Computer, to perform trigonometric calculations in hardware.
  • Analog Modeling: The system physically mirrored the celestial sphere rather than calculating positions digitally, since digital computation was too expensive and unreliable at the time.
  • Mechanical Scale: Nineteen distinct components and layers of amplifiers were required just to track a star.
  • Manual Interface: Data entry relied on knobs and analog dials, with values scrolled in one at a time.
  • Published Data: Navigators depended on paper almanacs for the coordinates of a constantly moving sky.
  • Celestial Reference Points: The First Point of Aries and other historical markers define the coordinate grid imposed on the heavens for navigation.
  • Transitional Technology: The Angle Computer briefly bridged the gap between clockwork mechanisms and the nascent electronic era before being replaced by digital systems.

Before GPS, how did aircraft navigate? One important technique was celestial navigation: navigating from the positions of the stars, planets, or the sun. While celestial navigation is accurate, cannot be jammed, and doesn't require any broadcast infrastructure, it is a difficult and time-consuming process to perform manually. In the early 1960s, an automated system was developed for the B-52 bomber to automatically track stars and compute navigation information. Digital computers weren't suitable at the time, so the star tracking system performed trigonometric calculations with an electromechanical analog computer called the Angle Computer.1

The Angle Computer contains complex electromechanical systems. Click this image (or any other) for a larger image.

The photo above shows the mechanism inside the Angle Computer.2 Although it may look like a gyroscope or IMU (Inertial Measurement Unit), it is completely different and nothing is spinning. The Angle Computer physically models the "celestial sphere", with a complicated mechanism inside that moves a pointer that represents the position of a star. The corresponding angles (the azimuth and altitude) are read out electrically through devices called synchros, providing information to the navigation system through bundles of wires. In this article, I'll give an overview of how celestial navigation works and explain how the Angle Computer performs its calculations.

The Astro Compass system

The Angle Computer is one piece of the Astro Compass, a system that locked onto a star and produced a heading (i.e., compass direction) accurate to a tenth of a degree. While the heading is the main output from the Astro Compass, the navigator can also use it to determine position, using the "lines of position" technique described later.

The Astro Tracker was mounted on top of the aircraft with the plastic bubble sticking out.

The Astro Compass navigation system was built around the "Astro Tracker" (above), the optical system that tracks a star. The Astro Tracker was mounted on the aircraft with the 4-inch glass dome protruding from the top of the fuselage. This unit contains a tracking telescope, which used a photomultiplier tube to detect the light from a star. A gyroscope and a complicated system of motors provided a "stable platform", keeping the telescope precisely vertical even as the aircraft tilted and moved. A prism rotated and tilted to aim the telescope at a particular star.3

Star tracker instruments in the B-52 navigator's instrument panel: Line of Position display, Master Control panel, Heading Display panel, and Indicator Display panel. From Kollsman MD-1 Automatic Astro Compass Manual.

The Astro Compass system is bewilderingly complicated, consisting of 19 components (above) to support the Astro Tracker.4 On the right are the ten amplifier and computer components that controlled the system; the Angle Computer is in the lower right. On the left are the nine control and indicator panels that were used by the B-52's navigator. The photo below shows four of these panels in use in a B-52 in 1972.

The navigator's station in a B-52. Some of the Astro Compass controls are indicated with arrows: the Line of Position display and the Master Control on the left, and the Heading display and Indicator display to the right. The navigator in this photo is Carl Hanson-Carnethon. From Rob Bogash's B-52 photo album. This specific B-52 (#2584) is now at The Museum of Flight, Seattle, but the Astro Compass is no longer present.

Controlling the Astro Compass

The Astro Compass has an interesting user interface, letting you input one value at a time by rotating a knob. First, you use the Master Control Panel to select a data value such as the clock time, SHA (Sidereal Hour Angle) for star #1, or Declination for star #3. Then you turn the "Set Control" knob clockwise or counterclockwise to scroll through the data values until the proper value is reached. Each knob on the Master Control Panel has a different geometrical shape, allowing the user to distinguish the knobs by feel. The Master Control Panel is visible in the lower left corner of the photo above, within easy reach of the navigator.

The Master Control Panel is the main interface to the Astro Compass.

Each data value has a separate electromechanical display. The photo below shows a Star Data display, indicating the sidereal hour angle and the declination for a star. I removed the cover so you can see how the digital display actually consists of analog dials rotated by motors under synchro control. The system has three Star Data displays, so it can hold the positions of three stars at a time. Getting fixes from three different stars is useful when computing lines of position. The system uses one star at a time, but you can quickly change stars by flipping the Star switch on the Master Control Panel.

A Star Data display with the cover removed.

But how did the navigator obtain the information to put into the Astro Compass, since the sun, moon, stars, and planets are in constant motion?5 The necessary celestial information is published in a book called the Air Almanac. The US Government started publishing the Air Almanac in 1941, issuing a new volume every four months. The Almanac had a sheet for each day, providing celestial data at 10-minute intervals. The first column has the time (GMT, Greenwich Mean Time)6 while the other columns give the position of the sun, an important value called the First Point of Aries (symbol ♈︎), the positions of the visible planets, and the position of the moon. A separate table and chart provided the locations of stars; the stars don't have daily updates since they are almost stationary.7 (The Air Almanac is now online; you can download the 2026 Air Almanac here.)

An excerpt from the 1960 Air Almanac. Photo used with permission from tanasa2022, who is selling the Almanac on eBay.

The navigational triangle: Computing a star's position

The Air Almanac provides star coordinates in a global coordinate system, but the Astro Compass needed to know star coordinates in the aircraft's local coordinate system. Determining the star's position requires changing the coordinate system by using spherical trigonometry and something called the navigational triangle. There's a fair bit of terminology involved, which I'll explain in this section.

The Astro Tracker, like many telescopes, is aimed by using azimuth and altitude. Suppose you go into your yard, point at the horizon, and turn 360° in a circle; the direction you're pointing is called the azimuth. The point directly overhead is called the zenith. Now swing your arm upwards 90° from the horizon to the zenith. That angle is called the altitude. (Confusingly, the term "altitude" is used both for the angle of a star and the height of an aircraft.) Thus, if you point at a particular star, you can describe its position with two angles: your horizontal rotation from north gives the azimuth, and the angle up from the horizon gives the altitude.8 This system is called the horizontal coordinate system, as it is based on the horizon. (The word "horizontal" comes from "horizon", by the way.) This is a local coordinate system since other locations will have a different azimuth and altitude for the star. The azimuth and altitude constantly vary with time because the Earth's rotation makes the star appear to move.

The equations for the altitude and azimuth are complicated, with sines, cosines, arcsine, and arctangent. To see why the equations are complicated, consider a time-exposure photo of star trails. As the Earth rotates, each star forms a circle around Polaris, the North Star. To trace out this circular path, the altitude and azimuth vary in a trigonometric way. This computation is performed electromechanically by the Angle Computer, as will be explained later.

Kitt Peak National Observatory beneath star trails. Credit: DESI Collaboration/DOE/KPNO/NOIRLab/NSF/AURA/L. Tyas, CC BY 4.0.

Now let's switch to how the position of a star is defined in the Air Almanac (for example), independently of your local position. Pretend that the stars are on the surface of a large sphere that surrounds the Earth, called the celestial sphere. The stars are stationary on the surface of the celestial sphere, while the Earth rotates once a (sidereal)9 day in the middle. Thus, as you look up at the celestial sphere, you see the stars moving. You can extend the Earth's equator out to the celestial sphere, defining the celestial equator. Likewise, the celestial sphere has celestial poles, matching the Earth's poles. On the Earth, you specify a location (such as the airplane's location) with latitude and longitude (red). Latitude is measured from the equator, and longitude is measured from a fixed meridian (orange). The 0° meridian is arbitrarily defined to pass through Greenwich (England, not Connecticut). Similarly, the position of a star is specified by the angle from the celestial equator (called declination instead of latitude) and the angle from the meridian (called the sidereal hour angle or SHA instead of longitude).10

The celestial sphere, with the Earth at the center. The position of a star is described by Sidereal Hour Angle and declination, analogous to longitude and latitude describing the position of, say, an airplane on the Earth. The diagram is based on patent 2998529, "Automatic astrocompass".

But what meridian is the starting point—0°—when measuring a star's Sidereal Hour Angle? The celestial equator matches the Earth's equator, but this won't work for the Greenwich meridian because it is constantly in motion. Instead, the 0° celestial meridian is arbitrarily defined as the position where the sun crosses the equator at the vernal equinox (the start of spring). If you consider the position of the sun on the celestial sphere, the sun will travel around the sphere once a year. Because the Earth's axis is tilted, the sun will be above the equator half the year and below the equator half the year, crossing the equator at the vernal equinox (March) and the autumnal equinox (September).

This reference point on the celestial sphere is called the First Point of Aries, represented by the symbol ♈︎ (horns of a ram); you might remember this symbol from the Air Almanac. At this point, the sun is in the constellation Pisces. So why is this point called the First Point of Aries and not Pisces? Back in 130 BCE, the ancient Greek astronomer Hipparchus defined the First Point of Aries as the starting point for the sun's motion. In that distant era, the sun was in the constellation Aries at the equinox, not in Pisces as it is today. It turns out that the direction of the Earth's axis isn't fixed, but moves in a 26,000-year cycle called the precession of the equinoxes.11 A 26,000-year cycle may seem irrelevant, but it's fast enough that the sun has moved from Aries to Pisces since Hipparchus's time. (And the equinox has moved 1° more since the B-52 was first produced!)

(All this talk of Aries and Pisces may sound like astrology, and, yes, there is a direct connection. Aries is the first zodiac sign, starting at the vernal equinox, typically March 21. The equinox's precession is "backwards", so the equinox has moved to Pisces, the last zodiac sign. Astronomically, the equinox will move into the constellation Aquarius around 2600 CE, but astrologers disagree on whether the Age of Aquarius has started; perhaps the 1960s was the dawning of the Age of Aquarius.)

How do you convert the star's fixed coordinate to the Earth's rotating coordinate? First, you look up the angle between the Greenwich meridian and the celestial meridian of Aries at a particular time. This angle (purple) is called the Greenwich Hour Angle of Aries (GHA ♈︎). Next, you look up the star's Sidereal Hour Angle (SHA). Adding them gives you the star's Greenwich Hour Angle (red), the angle between the Greenwich meridian and the star. Subtracting the aircraft's longitude gives you the Local Hour Angle (LHA, not shown), the angle between the aircraft's meridian and the star. (Note that these steps are simply addition and subtraction, so a mechanical system can easily do them with differential gears.)

Computing the Greenwich Hour Angle of the star on the celestial sphere.
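
To make the hour-angle bookkeeping concrete, here is the same arithmetic as a short Python sketch. The function names and sign conventions are my own illustration, drawn from standard celestial-navigation practice rather than from the Astro Compass manuals.

```python
# Hour-angle arithmetic as described above. All angles are in degrees,
# measured westward, with west longitude taken as positive. Illustrative
# names, not Astro Compass terminology.

def wrap360(angle):
    """Wrap an angle into the range [0, 360)."""
    return angle % 360.0

def greenwich_hour_angle(gha_aries, sha_star):
    """GHA of a star: the GHA of Aries (from the Air Almanac) plus the star's SHA."""
    return wrap360(gha_aries + sha_star)

def local_hour_angle(gha_star, longitude_west):
    """LHA: the star's GHA minus the observer's longitude."""
    return wrap360(gha_star - longitude_west)
```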

The final step, obtaining the azimuth and altitude, requires tricky spherical trigonometry. The yellow triangle is the navigational triangle, a spherical triangle on the surface of the celestial sphere. The upper vertex is the North Pole, the red vertex is the airplane's zenith (i.e., directly above the airplane), and the final vertex is the star. Two sides of the triangle and an angle (purple) are known, so the remaining angles and sides can be solved with spherical trigonometry. Specifically, the first side (purple) is 90°-declination, the second side is 90°-latitude,12 and the angle between is the LHA (Local Hour Angle). Solving for the angle at the zenith gives the azimuth (blue), while solving for the third side gives 90°-altitude (green, the angle down from the zenith to the star).

By solving the navigational triangle, the altitude and azimuth can be obtained.

Thus, the key problem is solving the navigational triangle. Navigators could solve the navigational triangle by looking up angles in a thick book of "sight reduction" tables and performing some math. But how could the process be automated? That was the purpose of the Angle Computer.
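
For comparison with the mechanical solution described in the next section, here is how the triangle can be solved algebraically, the way a digital computer would. This is a minimal Python sketch using the standard textbook spherical-trigonometry formulas; it is my illustration, not the Angle Computer's actual mechanization.

```python
from math import asin, atan2, cos, degrees, radians, sin

def solve_navigational_triangle(declination, latitude, lha):
    """Solve the navigational triangle (all angles in degrees).

    Returns (altitude, azimuth), with the LHA measured westward and the
    azimuth measured clockwise from true north.
    """
    dec, lat, h = radians(declination), radians(latitude), radians(lha)

    # The third side of the triangle is 90 degrees minus the altitude.
    altitude = asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(h))

    # Azimuth from the star's north-pointing and east-pointing components.
    north = cos(lat) * sin(dec) - sin(lat) * cos(dec) * cos(h)
    east = -cos(dec) * sin(h)
    azimuth = degrees(atan2(east, north)) % 360.0

    return degrees(altitude), azimuth

# Example: a star at declination 45°, seen from latitude 41° with LHA 30°.
alt, az = solve_navigational_triangle(45.0, 41.0, 30.0)
```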

The Angle Computer

The job of the Angle Computer was to solve the navigational triangle mechanically. Its inputs were the star's declination, the aircraft's latitude, and the local hour angle. From these, it computed the star's altitude and azimuth at the aircraft's current position.13

The concept behind the Angle Computer is that it physically modeled the celestial sphere with a half-sphere, 2 5/8" in radius. A star pointer was mechanically positioned on the surface of this sphere, using the star's declination and local hour angle, adjusted by the latitude of the viewer. The star pointer moved a readout mechanism that translated the star's position into the azimuth and altitude at the specified location. Thus, the Angle Computer mechanically converted between the coordinate systems by using a physical representation, solving the navigational triangle.

The diagram below shows how the star pointer is positioned on the two-dimensional surface of the sphere, using a complicated mechanism inside the sphere. The U-shaped declination arm swings up and down, corresponding to the star's declination (angle above the celestial equator). Meanwhile, the declination arm constantly rotates around the polar axis, as specified by the LHA (Local Hour Angle). In one (sidereal) day, the mechanism will make a full cycle, corresponding to the Earth's spin. Finally, the latitude arm moves the mechanism up or down, corresponding to the viewer's latitude. On the right, three gears provide the inputs for latitude, LHA, and declination.

The input mechanism for the Angle Computer. The photo has been rotated 90° to better match the Earth's rotation. Rotation around the polar axis corresponds to the Earth's daily rotation. Note that the star pointer will hit the end of the semicircular azimuth arc at some point; this corresponds to the star moving to the horizon and setting.

A separate mechanism provides the altitude and azimuth outputs, driven by the star pointer. The key is the semicircular azimuth arc, which represents the arc from the viewer's horizon to the zenith, oriented to a particular azimuth. The star pointer is attached to the azimuth arc through a slider, so as the star pointer moves, it moves the slider along the azimuth arc and also rotates the azimuth arc. The position of the slider on the azimuth arc corresponds to the altitude, from 0° at the horizon to 90° at the zenith.14 The azimuth arc rotates around the zenith point, which is at the back of the azimuth arc; this rotation indicates the azimuth value. As the azimuth arc rotates, it turns a gear at the zenith, providing the azimuth output. The slider arc has teeth on it; as the slider moves, these teeth rotate a second gear, providing the altitude output.

The output mechanism for the Angle Computer. The mechanism is in a different position from the previous diagram. In particular, the latitude arm has been raised to a near-polar latitude and the photograph is from the other side of the latitude arm. At this latitude, the polar axis is almost lined up with the zenith. As the LHA changes, the star will move in a circle, rotating the azimuth arc but causing little change in altitude. This corresponds to the real-world situation of stars moving in a circle around the zenith, if you're near the pole.

From the back, the numerous synchro transmitters, synchro control transformers, and motors are visible. Even though the computation itself is mechanical, the Angle Computer has numerous electrical components. In the top half, the synchro transmitters provide electrical outputs of the azimuth and altitude. (A synchro transmitter uses fixed and moving coils to convert a shaft rotation angle into a three-wire electrical signal.) The large gear provides the altitude output. In the lower half, the longer cylinders are motors that move the Angle Computer's mechanisms. The motors are directed to rotate to a particular position through a feedback loop: synchro control transformers provide feedback to the external servo amplifiers that power the motors.

The back of the Angle Computer.

Partially disassembling the Angle Computer shows the complex gear trains inside, linking the synchros, motors, and the physical mechanism. The squat brass-colored units in the lower center are differential assemblies to add or subtract signals.15 One of the drive motors, a long cylinder, is visible in the lower right.

Gear trains inside the Angle Computer.

The Line of Position

Although the heading was the primary output from the Astro Compass, the Astro Compass could also help determine the location of the aircraft, using a technique called the celestial line of position. This technique was discovered in 1837 and became heavily used for navigating ships with a sextant. It could also be used onboard an aircraft.

To understand the line of position, suppose you go outside and find a star directly overhead. If you measure the altitude—the angle from the horizon to the star—with a sextant, the angle will be 90°, since it is overhead. Now, suppose you teleport 60 nautical miles away in any direction. The sextant will now show an altitude of 89° to the star, since a nautical mile is conveniently defined to match one minute of angle (one-sixtieth of a degree). Alternatively, if you measure an altitude of 89° to the star, you know you are 60 miles away from the original point under the star (called the sub-stellar point). Likewise, if you measure 88° to the star, you're on a circle with radius of 120 nautical miles around the sub-stellar point. If you measure, say, an altitude of 40°, you know you're on a very large circle with radius of 3000 miles around the sub-stellar point. So how does this help with navigation?

Suppose you're on a boat in the middle of the Pacific and you have a rough idea of where you are, say within 100 miles, but you want to find your exact position. Put a dot on the map where you think you are. Next, pick a star and work out what the angle to the star should be from your position. Measure the altitude with your sextant. Suppose you expected 50° but measured 51°. You now know that you're somewhere on a circle with radius of 2340 miles around the distant sub-stellar point. This doesn't seem very useful. However, since the angle was 1° more than expected, you know that the circle is 60 miles closer to that distant point than your estimated position. Moreover, since you have some idea of where you are, you know that you're on the part of this circle near your estimated location. And since you're looking at a small part of a big circle, you can approximate it by a line. So you can go back to your map, move 60 miles closer to the star from your estimated point, and draw a perpendicular line. This is your line of position, and you know that you're on this line (more or less).
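
Because one minute of arc corresponds to one nautical mile, the intercept arithmetic fits in a couple of lines. Here is a minimal sketch; the function name is mine, while the Astro Compass display below calls this value the Altitude Intercept.

```python
# Altitude-intercept arithmetic: one minute of arc = one nautical mile,
# so one degree of altitude error = 60 nautical miles.

def altitude_intercept_nm(expected_altitude, measured_altitude):
    """Distance in nautical miles from the assumed position to the line
    of position. Positive means move toward the sub-stellar point (the
    star was higher than expected); negative means move away from it."""
    return (measured_altitude - expected_altitude) * 60.0

# The example from the text: expected 50°, measured 51°, so the line of
# position lies 60 nautical miles closer to the sub-stellar point.
intercept = altitude_intercept_nm(50.0, 51.0)  # 60.0
```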

Knowing that you're on a line isn't too useful, but you can repeat the process with a star in a different part of the sky. Maybe this time the angle is 2° smaller than expected, so you can draw a line of position 120 miles further away from your estimated position, in a different direction. The two lines cross, indicating a position where you (probably) are.16 Normally, you repeat the process with a third star, giving you three lines of position, providing a position and an idea of its accuracy.

The Line of Position display panel. Remember that the altitude here has nothing to do with the aircraft's altitude. From Kollsman MD-1 Automatic Astro Compass Manual.

The Astro Compass used the display above to show the star's azimuth and the distance in miles from the assumed location to the line of position, called the Altitude Intercept. With this information, the navigator could draw a line of position on the map. The navigator repeated the process with two more stars to get a location fix.17

Conclusion

The Angle Computer is a relic from a time when a mechanical analog computer was the best way to solve a problem, but the computer was also electrical. Although a mechanical apparatus solved the navigational triangle, it was moved into position by motors, and the output was transmitted electrically through wires. Moreover, the Angle Computer was driven by electronic amplifiers and feedback circuits that used both vacuum tubes and transistors.

The designers of the Astro Compass considered multiple approaches to computing the navigational triangle (details). The first was to use small electromechanical devices called resolvers that convert a physical rotation into sine and cosine values. By combining six resolvers with amplifiers, the altitude and azimuth could be obtained. The resolver solution was rejected as being too large and requiring a precision power supply. The second approach was to use a digital computer to determine the solution. This solution was rejected because in 1963, a digital computer was expensive, slow, and less reliable. The final approach, which was adopted, was to build a mechanical, physical model of the celestial sphere. Thus, the Angle Computer resided at the uneasy intersection of physical mechanisms, electrical circuits, vacuum tubes, and solid-state electronics, soon to be obsoleted by digital computers.

I plan to write more about the Astro Compass system. For updates, follow me on Bluesky (@righto.com), Mastodon (@kenshirriff@oldbytes.space), or RSS. Thanks to Richard for supplying the Astro Compass hardware.

AI statement: I didn't use AI to write this article (details).

Notes and references

  1. The Angle Computer is labeled "Computer, Altitude-Azimuth, Automatic Astro Compass Type MD-1" and also has an "MD-3" sticker. Presumably, MD-3 is an upgrade of the MD-1. The system is also known as the "Kollsman KS-50-03 Astro Tracking System" (or maybe 50-08).

    There are a few documents available on the system, including Operating Instructions Handbook, Operating Instructions Pocket Manual, a technical article The Celestial Tracker as an Astro Compass, and a patent Celestial Data Computer. The web page PRC68: Automatic Astro Compass Type MD-1 has an extensive collection of links. CuriousMarc has a YouTube series on the Astro Tracker, starting with part 1. If you want to learn more about celestial navigation, this World War II training film describes the process in detail. 

  2. From the outside, the Angle Computer is an uninteresting black cylinder with connectors on the end. The cylinder was sealed with a soldered metal band that we removed with a blowtorch. It was pressurized with dry nitrogen through the fill valve in the center, a Schrader valve just like you'd find on a tire.

    The Angle Computer is packaged in a nondescript black cylinder.

  3. The Astro Compass needed to know approximately where in the sky to find the star, in order to point its sensor in the right direction. The direction didn't need to be exact because the Astro Compass performed a spiral search pattern to find the star. This search pattern covered ±4° in bearing and ±2.5° in altitude. In comparison, the Moon is 0.5° wide, so it's a fairly large target area. 

  4. The diagram below shows the physical connections of the components of the Astro Compass.

    A physical diagram of the Astro Compass. The Angle Computer is called the Alt Az Computer in this diagram. Click this image (or any other) for a larger version.

    For a slightly different perspective, the diagram below shows the flow of data in the Astro Compass.

    A block diagram of the Astro Compass. The Angle Computer is called the Altitude Azimuth Computer in this diagram. From Automatic Astro Compass, Operating Instructions Handbook.

  5. The Astro Compass normally gets the latitude and longitude from the bombing computer. It normally gets the approximate heading (called the BATH, Best Available True Heading) from the magnetic compass. These values can all be entered manually if necessary. 

  6. Greenwich Mean Time is now mostly obsolete, replaced by UTC (Coordinated Universal Time). Greenwich Mean Time is based on when the sun reaches its highest point over Greenwich, England (longitude 0°). In solar time, the sun reaches its highest point at exactly noon. Unfortunately, the Earth's orbit is elliptical, so the length of a solar day varies throughout the year, by almost a minute. Since it's nice to have a constant 24-hour day, Mean Time was introduced. The idea is to average out the length of the day throughout the year, so each day is exactly 24 hours, even though the sun is no longer overhead exactly at noon. UTC is essentially the same as GMT, but defined by atomic clocks rather than the position of the sun over Greenwich. They can vary by up to 0.9 seconds, with a leap second added to UTC to keep them in sync. 

  7. The stars are all moving in different directions, but for most stars, the visible change in position (the proper motion) is very small. However, comparing the 1960 Air Almanac with the 2026 Air Almanac shows many of the listed stars have moved a degree or more due to the precession of the equinox. The change varies from star to star, both because the angular change depends on the star's location and because the SHA is exaggerated as you get closer to the poles (details). 

  8. Note that the azimuth is discontinuous at the zenith. To see this, imagine a star passing directly overhead: point your arm at the horizon and then swing it up until it is pointing straight up. To continue, you need to instantaneously spin around 180° and then lower your arm.

    The discontinuity in azimuth is important for the Angle Computer, since it can't instantaneously change the azimuth by 180°. To avoid this problem, the Angle Computer has cams and microswitches to keep the altitude below 85°. (Otherwise, the azimuth arc will jam up instead of rotating smoothly.) The Astro Tracker also has declination limits of +90° and -47° and a lower altitude limit of -6°. The latitude is limited to the range between -2° and +90°; the system automatically switches hemispheres so both the North and South latitudes are usable. 

  9. One annoyance is that the length of a day is slightly different if you look at the sun (a solar day) versus looking at the stars (a sidereal day). A solar day is the standard 24-hour day, where the Earth rotates once and the sun returns to its previous position (approximately). But if you look at the stars, it takes a bit less time (23 hours, 56 minutes, and 4 seconds) for the stars to return to their previous position. The problem is that during one year, the Earth swings from one side of the sun to the other side and then back to the first side. From the perspective of the stars, this is an "extra" revolution, so there are 366.25 sidereal days in a year, compared to 365.25 solar days in a year. (I.e., it's an "off-by-one" error.) This makes each sidereal day slightly shorter. You can also think of this as the sun moving around the celestial sphere once per year, with the sun's position against the stars constantly changing. 

  10. Celestial navigation usually uses the sidereal hour angle (SHA) to measure the star's position relative to the meridian. Astronomers often use the right ascension instead. The right ascension is measured in the opposite direction and is expressed in hours instead of degrees. The two are related by the formula RA (in hours) = (360° − SHA) / 15.

  11. The Earth's axis also wobbles on a cycle of 18.6 years because the Earth isn't exactly spherical. For many purposes, this wobble is averaged out and the "mean equinox" is used. The physical equinox is called the "apparent equinox". Greenwich Mean Sidereal Time (GMST) is measured with respect to the mean equinox, while Greenwich Apparent Sidereal Time (GAST) is measured with respect to the apparent equinox. The difference between the mean equinox and the apparent equinox is called the "equation of the equinoxes". The difference between the two equinoxes is small, less than about 1.1 seconds. 

  12. The angle of 90°-declination is sometimes called co-declination, the complement of declination, i.e., the angle down from the pole. Similarly, 90°-latitude is sometimes called co-latitude.

    The triangle can be solved using the spherical law of sines and the spherical law of cosines. An alternative, which makes more sense to me, is to find the answer by applying rotation matrices to change the coordinate system. Details are here, and Wikipedia has a convenient summary. 

  13. It may seem like there is a chicken-and-egg situation with navigation since you need to know your position in order to compute the star's altitude and azimuth, and you need to know the aircraft's heading to know which direction to point the telescope. In fact, you just need to know the approximate latitude, longitude, and heading (within 4°), and then the system generates a more accurate latitude, longitude, and heading. The process can be repeated until the values converge.

    Moreover, the Astro Compass is just one of the instruments that the navigator uses. The magnetic compass can provide an approximate heading, and dead reckoning or inertial navigation can provide an approximate location. The Astro Compass can use these to generate more accurate information, which in turn can improve the accuracy of the dead reckoning or inertial navigation. 

  14. Since the azimuth arc is a semicircle (180°), it might seem that the star pointer could move 180° in altitude along the azimuth arc. This wouldn't make sense, since the altitude ranges from 0° (horizon) to 90° (zenith). The explanation is that the slider is a quarter-circle (90°). Thus, the star position can only move 90° before the other end of the slider hits the end of the azimuth arc. 

  15. The differential gears are necessary because the axes aren't mechanically independent. For instance, as the latitude arm swings up and down, it also moves the declination and LHA drive shafts, causing unwanted rotation along these axes. The differentials subtract out the latitude motion from the declination and LHA inputs, so the resulting movements on each axis are independent. 

  16. Technically, two different circles on a sphere can cross at 0, 1, or 2 points. In practice, there will be two intersections, but one intersection is very far away and can be ignored. 

  17. Several factors complicated the navigator's job. By the time the navigator completed a measurement, the aircraft could have moved dozens of miles, so the navigator needed to adjust the lines of position based on this movement. But the navigator didn't know exactly how much the aircraft had moved, due to wind and other factors. Thus, even with the Astro Compass, the navigator needed to deal with uncertainty, cross-checking between different measurements to try to get the best results despite constant sources of error. 


Google banks on AI edge to catch up to cloud rivals Amazon and Microsoft


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Corporate Expansion Strategy: The company aims to capture market share from established cloud competitors by integrating its proprietary hardware and software stacks.
  • Financial Margin Optimization: The firm claims significant cost advantages from avoiding external hardware and model licensing fees.
  • Infrastructure Vertical Integration: Internal development of custom processors is framed as a strategic necessity to reduce reliance on third-party silicon suppliers.
  • Market Share Growth: The cloud division has grown its market share to 14 per cent despite entering the market long after its major rivals.
  • Competitive Displacement Claims: Leadership suggests that home-grown tensor processing units outperform competing architectures from other large technology firms.
  • Capital Expenditure Scaling: Massive investment in data centres and proprietary hardware continues, with capital spending forecast to reach $185bn this year.
  • Vendor Relationship Tension: Strategic friction persists with its primary hardware partner as the company develops in-house alternatives to the components it currently purchases.
  • Industry Consolidation Prediction: Executives anticipate a shakeout among AI startups that rely on private capital while running heavy operational losses.

Google’s cloud boss says that a pair of new chips and rapid advances at its DeepMind AI lab will help it close the gap with Microsoft and Amazon in the fiercely competitive cloud computing market.

Thomas Kurian said that after a slow start in AI and entering the cloud business late, Google’s “full-stack” AI strategy — which includes building chips, data centres, foundation models and products in-house — was starting to pay off. 

“We’re not just a hyperscaler reselling other people’s technology. Our differentiation comes down to the fact that we own the IP, the model and the chips are ours,” Kurian said in an interview.

“For every dollar of revenue, we’re not shipping 80 per cent of it to either a model or chip provider, which allows us to invest more,” he added.

Eight years after joining Alphabet from Oracle, Kurian has grown its cloud market share from 7 to 14 per cent — cementing his position as a contender to be Google’s next leader.

But Google Cloud remains a distant third to Amazon Web Services and Microsoft’s Azure in the $418bn cloud-computing market. Alphabet has also been criticised for allowing OpenAI and Anthropic’s chatbots and coding assistants to leapfrog its own AI products.

AI is now helping Google Cloud to grow faster than its rivals; it reported a 48 per cent jump in revenue in the final quarter of 2025 and is on track to generate more than $70bn this year, up from $43bn in 2024.

Google believes its TPUs and Gemini models are far ahead of Amazon’s Trainium chips and Nova AI system as well as Microsoft’s Maia processors and MAI models. This makes the search giant less dependent on partnerships with Anthropic and OpenAI or on Nvidia’s expensive GPU chips.

Kurian said that Google’s 12-year investment in DeepMind allowed it to continually improve its proprietary chips and deliver consumer and enterprise AI products at a lower cost with better margins.

Google unveiled two new chips this week at a splashy event in Las Vegas, the eighth generation of its TPUs, or Tensor Processing Units. One specialises in training AI models, while the other has more memory to run AI systems faster, known as inference.

“You need a large lab in-house to really build an amazing chip [and] I don’t think the other players are building their own models, of any quality at least,” Kurian said. Only Nvidia currently rivalled Google’s combination of AI hardware and integrated chip software, he added.

Google’s emergence as a competitor to Nvidia has strained the relationship between the two companies, even as Alphabet remains one of its biggest GPU customers.

A report from Epoch AI estimates that Google controls about a quarter of global AI computing power, about 3.8mn TPUs and 1.3mn GPUs. Microsoft is second with 3.2mn Nvidia GPUs.

In a recent podcast, Nvidia chief executive Jensen Huang criticised Google for not submitting its AI chips to independent tests and cast doubt on their performance and efficiency claims. 

He added that “100 per cent” of demand came from Anthropic and without the start-up “why would there be any TPU growth at all?”

Kurian responded that nine of the top 10 AI labs used TPUs, including ex-OpenAI executive Mira Murati’s Thinking Machines. OpenAI cannot use them because of an exclusivity deal with Microsoft.

“They have a choice of what to buy. If we were not competitive in performance, in price, in quality, they would choose not to do so,” he said.

Anthropic on Friday struck a deal under which it will buy more of Google’s chips. Google agreed to invest up to $40bn in the start-up and provide 5GW of computing capacity over five years, worth more than $200bn.

Google is also spending heavily on its in-house AI efforts, with capital expenditures forecast to rise to $185bn this year. Kurian argues the vast sums are justified by customer demand and strong revenues.

He said OpenAI and Anthropic face a more difficult financial path, which could also imperil Big Tech groups that rely on them. Both start-ups are losing tens of billions a year as they race to secure computing power to train and run their models.

“Those AI providers depend on private capital markets, which are reaching a saturation point,” he added. “If you’re going to go public, you can’t be lossmaking forever. And if you stay private, you cannot raise venture money forever.”

This year OpenAI and Anthropic raised more than $150bn in two of the largest private fundraisings in history as they prepared for IPOs. Dozens more start-ups have raised multibillion-dollar sums.

“Over the next year to two you will see some shakeout in the market,” Kurian said. Whether “particular providers are going to make it or not largely comes down to the economics”.


Can Starship succeed where the space shuttle struggled? | Science | AAAS


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Design Goals: Starship seeks full reusability with short turnaround times for orbital flight.
  • Physical Challenge: Reentry requires dissipating the enormous energy put into the spacecraft during launch.
  • Velocity Factors: Vehicles returning from low-Earth orbit enter the atmosphere at about Mach 25, or 17,000 miles per hour.
  • Thermal Extremes: The impact of high-velocity air during descent generates temperatures of 5,000 to 7,000°C.
  • Heat Management: Protection methods include storing heat in insulating tiles and ablation, in which material burns away through a phase change.
  • Space Shuttle History: Early tile technology relied on fragile silica materials that required intensive refurbishment after each flight.
  • Payload Tradeoffs: Fuel reserved for controlled landings reduces the cargo capacity available for orbital delivery.
  • Future Outlook: Reliable, fully reusable rockets are estimated to be roughly 30 years away.

Like the space shuttle before it, SpaceX’s giant reusable rocket Starship is built around the idea that a spacecraft should fly, land, and fly again. But Starship—whose newest version, Starship V3, is expected to launch for the first time as soon as next month—is designed from the outset for full reusability, with far shorter turnaround times than the shuttle. If the ambitious rocket works as intended, its launch and return could mark a turning point in how engineers think about getting to and from orbit, just as the shuttle once did.

Both vehicles, though, come with the same catch: the extreme conditions of atmospheric reentry while descending back to Earth. To understand why this phase of flight is so difficult and what it takes to shield reusable rockets, Science spoke with Stephen Whitmore, an aerospace engineer and director of the Propulsion Research Laboratory at Utah State University.

This interview has been edited for clarity and length.

Q: Why is atmospheric reentry often described as the hardest part of reusable spaceflight?

A: The conservation of energy—it doesn’t have anywhere to go. Look at a rocket and think of all the energy that’s stored in that rocket on the launch pad: all of that propellant, all that fire, everything else. All of that gets put into the spacecraft. When it’s up there in orbit, it has stored all of it as potential energy. When it comes back, it’s got to be released somewhere.

It ends up being recaptured by the impact of the high-velocity air. It’s entering the atmosphere at Mach 25, which is a little over 17,000 miles per hour [or nearly 30,000 kilometers per hour] from low-Earth orbit.

That’s basically the issue with atmospheric reentry: You’re taking all of that energy that was stored and getting you into orbit, and now you’ve got to return it. And it all happens in a very short period of time. From when they hit the atmospheric interface until they land is only about 15 minutes.

As a way to demonstrate how much energy is in a rocket plume, we’ll put a steel bar in the plume so people can watch that steel bar vaporize in about two-tenths of a second.
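
To put rough numbers on this, here is a back-of-the-envelope estimate of my own (the speed and timing come from the interview; the arithmetic does not):

```python
# Rough reentry-energy arithmetic (my own estimate, not from the interview).
# Kinetic energy per kilogram at low-Earth-orbit speed, and the average
# power at which it must be shed over a roughly 15-minute reentry.

v = 7_800.0                                # orbital speed, m/s (~Mach 25)
ke_per_kg = 0.5 * v**2                     # ~30 MJ per kilogram of vehicle
avg_power_per_kg = ke_per_kg / (15 * 60)   # ~34 kW per kilogram

# For scale, ~30 MJ/kg is several times the energy released by
# detonating a kilogram of TNT (~4.2 MJ/kg).
```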

Q: How hot does reentry get?

A: The actual impact temperature of the air on the main leading edge is going to be on the order of 5000 to 7000°C, nearly 10,000°F. Nothing can survive those temperatures.

Utah State University aerospace engineer Stephen Whitmore has worked for decades on spaceflight hardware, including on the development of a 3D-printed hybrid rocket motor seen here. Matt Jensen/Utah State University

Q: What are the main approaches to countering the intense heat of reentry?

A: One way is energy storage. You basically store it and release it slowly so that it doesn’t get into the crew area. That’s what the space shuttle did. They had tiles all over the impact areas, and those tiles had a very large heat capacity. The tiles brought a lot of heat in but very slowly released it.

When the shuttle landed, that heat would all still be in there and then it would get released. That’s why when you saw the shuttle land, [NASA] would send all those trucks with refrigerants and that kind of stuff to pump energy back out of the shuttle. Otherwise, the frame would have actually melted.

The other approach is ablation. You can think of it as being a material that, as hot air impacts it, acts as the heat shield and burns away. It’s just like when you’re melting ice—that latent heat of evaporation pulls heat away from the system. It’s a phase change going from a solid directly to a gas. It’s a very energy-intensive process, so that pulls a lot of the heat away.

And obviously no one knows exactly what the Starship uses for its heat shield because Elon [Musk, SpaceX’s CEO] keeps that very close to the vest, but it’s a combination of heat storage and ablation.

Q: So, in hindsight, where did the space shuttle go wrong?

A: The problem is that they had to make up the materials as they went. In the early days of building the shuttle tiles, it was guys with silica fiber using, believe it or not, commercial dryers—closed and tumbling—to actually make the stuff. And although they’re very, very resistant to heat, those tiles are very fragile. You pick up a shuttle tile and it weighs almost nothing, and so that was one of the issues.

The problem with the shuttle is that it was such a new thing—probably about four decades ahead of its time. There were enough flaws in it that it became too expensive to maintain operationally. It became such a valuable asset, especially after the Challenger accident back in 1986, that NASA could not tolerate even the remotest potential issue.

So, every time it came back, they essentially had to rebuild the thing. Other than the airframe, they rebuilt a good portion of it.

Q: What concerns do you have about fully reusable rockets, versus only partially reusable rockets like SpaceX’s Falcon 9?

A: Commercially, it makes a lot of sense. But also remember the way that Starship does this—they have to use fuel to come down and make a controlled landing. That fuel takes away from the payload that can be delivered to orbit. Reusability sacrifices the lift capacity of the rocket. There’s a trade space in there. SpaceX is a commercial venture, and their trade is to sacrifice payload in order to have full reusability. It’s more profitable.

And with reusability, you’ve got to salvage components, so they become extraordinarily valuable. And that does constrain your mission; it constrains what you can do in terms of operational risk. I think we’re looking at, probably, a 30-year timeline until fully reusable rockets become reliable.
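
The payload penalty he describes falls directly out of the Tsiolkovsky rocket equation. A deliberately simplified single-stage sketch with invented round numbers (none are from the interview) shows the direction and rough size of the trade:

```python
from math import exp

# Tsiolkovsky rocket-equation sketch of the reusability trade: propellant
# reserved for the landing burn is mass that can no longer be payload.
# Single-stage toy model with invented round numbers, for illustration.

VE = 380.0 * 9.81    # m/s, effective exhaust velocity (Isp ~ 380 s)
DRY = 60.0           # t, stage dry mass
PROP = 1_000.0       # t, total propellant loaded
DV = 9_400.0         # m/s, rough delta-v to reach low-Earth orbit

def max_payload(landing_reserve):
    """Payload (t) that exactly meets DV when `landing_reserve` tonnes of
    propellant are withheld for the descent and landing burns."""
    r = exp(DV / VE)   # required mass ratio m0/m1, from dv = VE * ln(m0/m1)
    # m0 = DRY + PROP + payload;  m1 = DRY + landing_reserve + payload
    return (DRY + PROP - r * (DRY + landing_reserve)) / (r - 1.0)

expendable = max_payload(0.0)
reusable = max_payload(10.0)   # hold back just 1% of propellant to land
print(f"Expendable: {expendable:5.1f} t to orbit")
print(f"Reusable:   {reusable:5.1f} t to orbit "
      f"({100 * (1 - reusable / expendable):.0f}% payload penalty)")
```

In this toy model, withholding even 1% of the propellant costs roughly 40% of the payload, because the reserve must be hauled all the way to orbit. Real vehicles stage and land their boosters separately, so the penalty is smaller, but the direction of the trade is exactly the one he describes.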


Musk v. Altman heads to court next week. Here's what's at stake

  • Legal dispute: Elon Musk and OpenAI’s leadership head to a federal trial over alleged breaches of the company’s original nonprofit commitments.
  • Financial stakes: The litigation seeks up to $134 billion in damages and asks the court to unwind OpenAI’s for-profit restructuring.
  • Organizational history: Founding principles centered on benefiting humanity gave way to structural changes and the creation of a for-profit subsidiary.
  • Industry rivalry: The former partners have become bitter competitors through independent AI ventures and corporate mergers.
  • Judicial process: District Judge Yvonne Gonzalez Rogers presides over a bifurcated trial, with a liability phase followed by a remedies phase.
  • Remaining claims: Four of the original claims persist, including unjust enrichment and breach of charitable trust; others are being dropped to streamline the case.
  • Defense position: OpenAI characterizes the lawsuit as an evasive, ego-driven effort to obstruct a competitor.
  • Market context: The outcome remains a potential risk factor ahead of anticipated public market debuts for the companies involved.


A yearslong legal brawl between Elon Musk, the world’s richest man, and OpenAI CEO Sam Altman heads to court in Northern California on Monday in a dramatic showdown between two of the most high-profile names in the tech industry.

In his $134 billion lawsuit, Musk claimed that OpenAI, Altman and the company’s president, Greg Brockman, reneged on a vow they made to keep the artificial intelligence lab a nonprofit in perpetuity. OpenAI has since restructured so that it can operate a for-profit subsidiary, and it’s now valued at over $850 billion.

Musk and Altman were once close friends, and were among a group of techies who founded OpenAI in 2015 out of a shared concern over the potential power of AI and the need to advance it in ways that would benefit humanity.

Now they’re public enemies and bitter rivals, with Musk having started xAI as an OpenAI competitor in 2023 and recently merging it with SpaceX in a deal valuing the combined entity at $1.25 trillion. The trial lands as Musk is preparing to take SpaceX public in what will likely be a record IPO.

OpenAI is targeting a potential fourth-quarter market debut, as CNBC previously reported. In a document distributed to prospective investors earlier this year, OpenAI characterized the ongoing litigation with Musk as a potential risk to its business.

The startup has repeatedly dismissed Musk’s lawsuit as “baseless,” calling it a “harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor,” according to a post on X earlier in April.

The war of words has been going on for months.

“Scam Altman lies as easily as he breathes,” Musk wrote in August in a post on X, which is part of xAI.  

“Really excited to get Elon under oath in a few months, Christmas in April!” Altman wrote on X in February.

Jury selection in Musk v. Altman begins Monday in a federal courthouse in Oakland, just over the Bay Bridge from San Francisco, where OpenAI is headquartered. Should he succeed, Musk said, he wants the court to return all “ill-gotten gains” to OpenAI’s nonprofit, not to him personally. He’s also seeking to have Altman and Brockman removed from their roles and to “unwind OpenAI’s for-profit conversion and restructuring.” 

It’s not the only litigation Musk has brought against OpenAI. X, formerly Twitter, along with xAI sued OpenAI and Apple in 2025 for alleged anticompetitive behavior. A hearing in that case is scheduled for May in Texas. And in February, a federal judge in California dismissed a separate lawsuit from xAI that accused OpenAI of stealing its trade secrets.

Musk, Altman rivalry escalates with new OpenAI hire

VIDEO3:5503:55

Musk, Altman rivalry escalates with new OpenAI hire

TechCheck

The Musk-Altman spat dates back to 2018, when Musk left OpenAI’s board after a number of disagreements with Altman and Brockman about the company’s direction, including a failed effort to merge the startup with Tesla, Musk’s electric vehicle company. Following Musk’s departure, OpenAI established a for-profit subsidiary that allowed it to raise outside investments more easily. 

OpenAI briefly considered plans to transition into a for-profit company in 2024, which would have wrested control from the nonprofit and kept it as a separate arm. But after facing pressure from civic leaders and ex-employees, including Musk, it changed course. The company completed a recapitalization in October that cemented its structure as a nonprofit with a controlling stake in its for-profit business.

Musk sued OpenAI, Altman and Brockman in 2024, alleging that he was “assiduously manipulated” and “deceived” by their promises that the company “would chart a safer, more open course than profit-driven tech giants.”

But the scope of Musk’s claims has shifted dramatically in recent months, as have his desired outcomes.

In a January filing, Musk’s attorneys said he should receive up to $134 billion in damages from OpenAI and Microsoft, one of OpenAI’s longtime backers, which is also named as a defendant in the lawsuit. Microsoft is accused of aiding and abetting OpenAI’s alleged misconduct on the breach of charitable trust claim.


Of the 26 claims that Musk asserted against OpenAI, Altman and Brockman in November 2024, only four remain: unjust enrichment, fraud, constructive fraud, and breach of charitable trust. Musk’s lawyers are seeking to dismiss two of the claims, fraud and constructive fraud, ahead of the trial in an effort to “streamline the case,” according to a filing.

OpenAI’s lawyers on Wednesday characterized Musk’s actions as “evasive tactics.”

“Trial begins in five days but Plaintiff still refuses to state plainly what claims he will pursue and what remedies he will seek,” they wrote in a filing. 

Judge Yvonne Gonzalez Rogers, who was appointed by former President Barack Obama to U.S. District Court for the Northern District of California in 2011, is presiding over the case. Gonzalez Rogers has overseen several high-profile lawsuits involving technology companies, including the antitrust case between Epic Games and Apple.

Nine jurors will be seated, and there will be no alternates, according to a March filing.

Gonzalez Rogers opted to divide the trial into two parts: a liability phase to decide if any wrongdoing occurred, and a remedies phase to determine the appropriate damages and next steps. The jury will weigh in during the liability phase only, and its verdict will be advisory, which means Gonzalez Rogers will make the final decision in both phases of the trial.

The liability phase of the trial is expected to last through mid-May, and the court will be in session from 8:30 a.m. to 1:40 p.m. PT every Monday through Thursday.

Jury selection will be followed by opening arguments. Gonzalez Rogers has given attorneys for Musk and OpenAI a total of around 20 hours each to present their cases. Microsoft will get five hours, according to a filing.

All three parties have submitted lists of witnesses they may call. Musk, Altman, Brockman and Microsoft CEO Satya Nadella are all named.

If OpenAI is found liable, Gonzalez Rogers will hear arguments for the remedies phase, which is scheduled to begin on May 18. 

“However, if the jury finds that Musk failed to file his action within the statute of limitations, it is highly likely that the Court will accept that finding and direct verdict to the defendants,” Gonzalez Rogers wrote. 

CNBC will be in the courtroom starting Monday. Follow the latest coverage here.



AI Is Cannibalizing Human Intelligence. Here’s How to Stop It. - WSJ

  • Experimental Design: Researchers compared human performance, AI performance, and human-AI hybrid teams in forecasting real-world events.
  • Predictive Accuracy: Large AI models outperformed humans working in isolation, though human-AI hybrids displayed the highest potential for total accuracy.
  • Hybrid Pitfalls: Many hybrid users relied on AI for direct answers, leading to poor outcomes characterized by confirmation bias and sycophancy.
  • Collaborative Cyborgs: A small subset of users treated AI as a sparring partner to interrogate assumptions and challenge AI-generated assertions.
  • Cognitive Requirements: Successful integration requires perspective-taking and intellectual humility rather than simple reliance on technological convenience.
  • Information Exploration Paradox: High volumes of easily accessible information may reduce critical thinking and individual exploration, potentially leading to human skill atrophy.
  • Strategic Reframe: AI should be utilized to search for what is missing in one's own logic rather than as a tool to automate routine labor.
  • Developmental Necessity: Cultivating cognitive resistance to AI-generated easy answers is essential for maintaining human agency and intellectual rigor.

By Vivienne Ming

April 24, 2026 2:00 pm ET



Who’s smarter, the human or the machine? 

In the 30 years I’ve worked in artificial intelligence, that’s been the question driving the conversation.

We’ve also been sold a story about AI that goes something like this: It will handle the tedious, routine work—the research, the first draft, the number-crunching—while we focus on the interesting parts: creativity, judgment, the human touch.  

My research suggests we’ve been asking the wrong question and drawing the wrong conclusions. 

A few months ago, I recruited adults from the San Francisco Bay Area for an experiment and divided them into teams. Each team had one hour to make predictions about real-world events, using scenarios drawn from the prediction market platform Polymarket. This gave us a rigorous, objective way to check results against the collective wisdom of thousands of financially motivated forecasters. In addition to AI making predictions on its own, some human teams worked alone, while others worked as human-AI hybrids. (Polymarket has a data partnership with Dow Jones, the publisher of The Wall Street Journal.)

The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models—ChatGPT and Gemini, in this case—performed considerably better, though still short of the market itself.

But when we combined AI with humans, things got more interesting.

Most hybrid teams used AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into the AI and asked it to come up with supporting evidence. These “validators” had stumbled into a classic confirmation-bias loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn’t true. They ended up performing worse than an AI working solo.

But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument. 

The hybrids were becoming cyborgs.

These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market’s accuracy. On certain questions, they even outperformed it.
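
The essay doesn’t specify how the forecasts were scored. The Brier score sketched below is the standard metric for grading probabilistic predictions; the numbers are invented purely to illustrate the comparison, not data from the study:

```python
# Brier score: the standard way to grade probabilistic forecasts.
# All forecasts and outcomes below are invented for illustration; they
# merely mirror the ordering the essay reports.

def brier(forecasts, outcomes):
    """Mean squared error between predicted probabilities and results
    (0 = perfect, 0.25 = always saying 50/50, 1 = maximally wrong)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]   # hypothetical resolved yes/no events
groups = {
    "humans alone":   [0.50, 0.60, 0.40, 0.55, 0.45],
    "AI alone":       [0.70, 0.30, 0.65, 0.70, 0.35],
    "cyborg hybrids": [0.85, 0.15, 0.80, 0.75, 0.20],
}
for name, predictions in groups.items():
    print(f"{name:>14}: Brier = {brier(predictions, outcomes):.3f}")
# Lower is better: humans ~0.275, AI ~0.103, hybrids ~0.038 here.
```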

It’s not that these people were more intelligent than the others in the study. But they demonstrated two important qualities: perspective-taking and intellectual humility.

Perspective-taking is the ability to genuinely inhabit another point of view. Not to debate it, not to tolerate it, but to actually inhabit it. Intellectual humility is the ability to recognize the edge of your own knowledge and sit with that discomfort rather than trying to rush to fill it.  

Both of these qualities are, at root, emotional skills. Perspective-taking requires genuine curiosity about minds other than your own. Intellectual humility requires a kind of emotional courage: the willingness to feel uncertain, even a little foolish, in the presence of something or someone that seems very sure of itself. 

These are not the soft skills we typically celebrate. We celebrate confidence. We promote decisiveness. We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it.  

What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They’re the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it. To read a confident, fluent response from an AI and ask yourself, “What’s missing?” rather than default to “Great, that’s done.” To disagree with something that sounds authoritative and to trust your instinct enough to follow it.

We don’t build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change their mind. Most AI chatbots today default to easy answers, which is hurting our ability to think critically.

I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

This is the divergence I worry about. Not the dramatic science-fiction scenario of AI replacing humans wholesale, but the quieter process of people gradually outsourcing their judgment in increments too small to notice. 

Over time, this produces two different kinds of people: Those who use AI as a genuine intellectual partner—whose thinking actually gets sharper through the friction of the collaboration—and those who get better at securing quick answers and worse at knowing what questions to ask.

So what can any of us actually do about it?

Start with the reframe: The goal of working with AI isn’t to get the answer faster. It’s to find out what you’re missing. Don’t deploy AI minions to “do the boring work” for you, as so many sales pitches urge; use AI as a savant collaborator to explore uncertainty.

In practice, that means before you accept an AI’s answer, ask it for the strongest argument against itself. When it hedges or qualifies, pay attention—that’s usually where the real uncertainty lives. Treat it like a brilliant colleague who has read everything and understands nothing—useful precisely because they’re different from you, not because they’ll agree with you.
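
As one concrete illustration of that advice, here is a minimal sketch of the “sparring partner” pattern using the OpenAI Python client. The model name, prompt wording, and helper function are my own assumptions, not anything from the essay; any capable chat model would serve:

```python
# A minimal "sparring partner" prompt pattern: ask the model to attack
# your position rather than confirm it. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def spar(question: str, my_view: str, model: str = "gpt-4o") -> str:
    """Request counterarguments instead of validation."""
    prompt = (
        f"Question: {question}\n"
        f"My current view: {my_view}\n\n"
        "Do NOT tell me whether I am right. Instead: (1) give the strongest "
        "argument against my view, (2) list the key facts I would need to "
        "check, and (3) say what evidence would change your own answer."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(spar("Will the measure pass this quarter?",
           "I think it passes; roughly 70% likely."))
```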

For the AI industry, a key design question has gone largely unasked: Is the product building human capacity or consuming it? Nearly all AI benchmarks measure what AI agents can do alone. We desperately need benchmarks for hybrid intelligence. Errors are signals our brains use to trigger learning. An AI that eliminates friction entirely is often eliminating the learning along with it.

A hopeful finding is that perspective-taking, intellectual humility and curiosity are not fixed traits. They can be cultivated, and they respond to practice, the right relationships, and environments that reward uncertainty.

But they require us to decide—as individuals, as parents, as educators, as designers of tools—that this is what we’re trying to build. And in the race between human potential and human atrophy, the stakes for building it could not be higher.

Vivienne Ming is a theoretical neuroscientist, cognitive scientist and the author of “Robot-Proof: When Machines Have All The Answers, Build Better People.” 

Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.

Appeared in the April 25, 2026, print edition as 'Is AI Smarter Than People? It’s Complicated.'

