
Lessons from the Baruch Plan for Nuclear Weapons

The invention of atomic energy posed a novel global challenge: could the technology be controlled to avoid destructive uses and an existentially dangerous arms race while permitting the broad sharing of its benefits? From 1944 onwards, scientists, policymakers, and other technical specialists began to confront this challenge and explored policy options for dealing with the impact of nuclear technology. We focus on the years 1944 to 1951 and review this period for lessons for the governance of powerful technologies, and find the following: Radical schemes for international control can get broad support when confronted by existentially dangerous technologies, but this support can be tenuous and cynical. Secrecy is likely to play an important, and perhaps harmful, role. The public sphere may be an important source of influence, both in general and in particular in favor of cooperation, but also one that is manipulable and poorly informed. Technical experts may play a critical role, but need to be politically savvy. Overall, policymaking may look more like “muddling through” than clear-eyed grand strategy. Cooperation may be risky, and there may be many obstacles to success.

That is by Waqar Zaidi and Allan Dafoe, at the Centre for Governance of AI, exactly the kind of work people should be doing.

The post Lessons from the Baruch Plan for Nuclear Weapons appeared first on Marginal REVOLUTION.

Did Ottoman Sultans ban print?

Did printing transform the Ottoman Empire? And what took the Ottomans so long to print? Much of the scholarship surrounding the topic of Ottoman printing, or the occurrence of printing within the Ottoman Empire (1453–1922), is structured around these two related frameworks. In this essay, I argue that these frameworks are ahistorical because they predicate Ottoman printing on the European experience of print. To support this point, I examine the disproportionate role played by certain early modern European accounts of Ottoman printing within Western and Arabic historiography. In particular, I examine the life cycle of scholars’ belief that Ottoman sultans banned printing, which I contrast with extant documentation for the imperial Porte’s stance on printing. I argue that the sources available to scholars today do not support the notion that the sultans banned printing. Rather, they demonstrate that this claim arose from early modern European scholars’ search to articulate their sense of Ottoman inadequacy through explanations for why Ottomans did not print. The history of this particular line of inquiry is significant, I argue, because many scholars continue to probe the issue of why Ottomans did not print. In so doing, they maintain the expectation that print would revolutionize society, even though they have begun questioning the existence of the ban.

That is from Kathryn A. Schwartz, in Print History (jstor).  Via Benedikt A.


The Nuclear Non-proliferation Treaty and existential AGI risk

The Nuclear Non-Proliferation Treaty, activated in 1970, has been relatively successful in limiting nuclear proliferation.  When it comes to nuclear weapons, it is hard to find good news, but the treaty has acted as one deterrent of many to nation-states acquiring nuclear arms.  Of course the treaty works, in large part, because the United States (working with allies) has lots of nuclear weapons, a powerful non-nuclear military, de facto control of SWIFT, and so on.  We strongly encourage nations not to go acquiring nuclear weapons — just look at the current sanctions on Iran, noting the policy does not always succeed.

One approach to AI risk is to treat it like nuclear weapons and also their delivery systems.  Let the United States get a lead, and then hope the U.S. can (in conjunction with others) enforce “OK enough” norms on the rest of the world.

Another approach to AI risk is to try to enforce a collusive agreement amongst all nations not to proceed with AI development, at least along certain dimensions, or perhaps altogether.

The first of these two options seems obviously better to me.  But I am not here to argue that point, at least not today.  Conditional on accepting the superiority of the first approach, all the arguments for AI safety are arguments for AI continuationism.  (And no, this doesn’t mean building a nuclear submarine without securing the hatch doors.)  At least for the United States.  In fact I do support a six-month AI pause — for China.  Yemen too.

It is a common mode of argument in AGI circles to present wordy, swirling tomes of multiple concerns about AI risk.  If some outside party cannot sufficiently assuage all of those concerns, the writer is left with the intuition that so much is at stake, indeed the very survival of the world, and so we need to “play it safe,” and thus they are led to measures such as AI pauses and moratoriums.

But that is a non sequitur.  The stronger the safety concerns, the stronger the arguments for the “America First” approach.  Because that is the better way of managing the risk.  Or if somehow you think it is not, that is the main argument you must make and persuade us of.

(Scott Alexander has a new post “Most technologies aren’t races,” but he neither chooses one of the two approaches listed above nor outlines a third alternative.  Fine if you don’t want to call them “races,” but you still have to choose.  As a side point, once you consider delivery systems, nuclear weapons are less of a yes/no thing than he suggests.  And this postulated take is a view that nobody holds, nor did we practice it with nuclear weapons: “But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to “win” the AI “race”.”  On the terminology, Rohit is on target.  Furthermore, good points from Erusian.  And this claim of Scott’s shows how far apart we are in how we consider institutional and also physical and experimental constraints: “In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.”)

Addendum:

As a side note, if the real issue in the safety debate is “America First” vs. “collusive international agreement to halt development,” who are the actual experts?  It is not in general “the AI experts,” rather it is people with experience in and study of:

1. Game theory and collective action

2. International agreements and international relations

3. National security issues and understanding of how government works

4. History, and so on.

There is a striking tendency, amongst AI experts, EA types, AGI writers, and “rationalists,” to think they are the experts in this debate.  But they are experts only on some issues, and many of those issues (“new technologies can be quite risky”) are not so contested.  And because these individuals do not frame the problem properly, they are doing relatively little to consult what the actual “all things considered” experts think.


What should I ask Kevin Kelly?

From Wikipedia:

Kevin Kelly (born 1952) is the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Review. He has also been a writer, photographer, conservationist, and student of Asian and digital culture.

Among Kelly’s personal involvements is a campaign to make a full inventory of all living species on earth, an effort also known as the Linnaean enterprise. He is also sequencing his genome and co-organizes the Bay Area Quantified Self Meetup Group.

His Out of Control is a wonderful Hayekian book.  His three-volume Vanishing Asia is one of the greatest picture books of all time.  His new book (I haven’t read it yet) is Excellent Advice for Living: Wisdom I Wish I’d Known Earlier.  Here is Kevin on Twitter, here is his home page.

I will be doing a Conversation with him, so what should I ask?


Are smarter entities more coherent?

There is an assumption behind this misalignment fear, which is that a superintelligent AI will also be supercoherent in its behavior. An AI could be misaligned because it narrowly pursues the wrong goal (supercoherence). An AI could also be misaligned because it acts in ways that don’t pursue any consistent goal (incoherence). Humans — apparently the smartest creatures on the planet — are often incoherent. We are a hot mess of inconsistent, self-undermining, irrational behavior, with objectives that change over time. Most work on AGI misalignment risk assumes that, unlike us, smart AI will not be a hot mess.

In this post, I experimentally probe the relationship between intelligence and coherence in animals, people, human organizations, and machine learning models. The results suggest that as entities become smarter, they tend to become less, rather than more, coherent. This suggests that superhuman pursuit of a misaligned goal is not a likely outcome of creating AGI.

That is from a new essay by Jascha Sohl-Dickstein, speculative but interesting.  Via N.


Which political tweets do best?

Analyzing every tweet of all US senators holding office from 2013 to 2021 (861,104 tweets from 140 senators), we identify a psycholinguistic factor, greed communication [TC: basically accusing other people of greed], that robustly predicts increased approval (favorites) and reach (retweets). These effects persist when tested against diverse established psycholinguistic predictors of political content dissemination on social media and various other psycholinguistic variables. We further find that greed communication in the tweets of Democratic senators is associated with greater approval and retweeting compared to greed communication in the tweets of Republican senators, especially when those tweets also mention political outgroups.

That is from new research by Eric J. Mercadante, Jessica L. Tracy, and Friedrich M. Götz.  Via David Lilienfeld.


Better predicting food crises

Anticipating food crisis outbreaks is crucial to efficiently allocate emergency relief and reduce human suffering. However, existing predictive models rely on risk measures that are often delayed, outdated, or incomplete. Using the text of 11.2 million news articles focused on food-insecure countries and published between 1980 and 2020, we leverage recent advances in deep learning to extract high-frequency precursors to food crises that are both interpretable and validated by traditional risk indicators. We demonstrate that over the period from July 2009 to July 2020 and across 21 food-insecure countries, news indicators substantially improve the district-level predictions of food insecurity up to 12 months ahead relative to baseline models that do not include text information. These results could have profound implications on how humanitarian aid gets allocated and open previously unexplored avenues for machine learning to improve decision-making in data-scarce environments.
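The evaluation design the abstract describes, comparing predictive skill with and without news-derived features, can be mimicked in a small sketch.  This is an illustrative toy on synthetic data, not the authors’ code: the feature names, the logistic model, and the effect sizes are all assumptions for illustration.

```python
# Illustrative sketch: does adding a news-derived indicator improve
# food-crisis prediction over traditional risk measures alone?
# Synthetic data only; feature names and model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Traditional indicators (weakly predictive in this toy setup)
rainfall_anomaly = rng.normal(size=n)
food_price_index = rng.normal(size=n)
# News-derived precursor, e.g. frequency of conflict/displacement mentions
news_mentions = rng.normal(size=n)

# Crisis outcome depends most strongly on the news signal in this toy
logits = 0.3 * rainfall_anomaly + 0.3 * food_price_index + 1.5 * news_mentions
crisis = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_base = np.column_stack([rainfall_anomaly, food_price_index])
X_full = np.column_stack([rainfall_anomaly, food_price_index, news_mentions])
train, test = slice(0, n // 2), slice(n // 2, n)

auc = {}
for name, X in [("baseline", X_base), ("baseline+news", X_full)]:
    model = LogisticRegression().fit(X[train], crisis[train])
    auc[name] = roc_auc_score(crisis[test], model.predict_proba(X[test])[:, 1])

print(f"baseline AUC:  {auc['baseline']:.3f}")
print(f"with news AUC: {auc['baseline+news']:.3f}")
```

In the paper’s actual setting the news features are extracted with deep learning from 11.2 million articles and validated against traditional risk indicators; the toy captures only the evaluation logic of benchmarking a text-augmented model against a baseline without text information.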

Here is more from Ananth Balashankar, Lakshminarayanan Subramanian, and Samuel P. Fraiberger.


Will remote work promote more family formation?

A new paper puts forth a fascinating theory: Maybe remote work is making it easier for couples to become parents—and for parents to have more children.

The economist Adam Ozimek and the demographer Lyman Stone looked at survey data of 3,000 American women from the Demographic Intelligence Family Survey. They concluded that female remote workers were more likely to intend to have a baby than all-office workers, especially if they were richer, older, and more educated. What’s more, remote workers in the survey were more likely to marry in the next year than their nonremote counterparts.

Remote work might promote family formation in a few ways. Remote workers can move more easily, because they don’t have to live within commuting distance of their job. This flexibility might result in more marriages by ending the “two-body problem,” where romantic partners find employment in different cities and must choose between their career and their relationship. What’s more, remote work reduces commutes, and those weekly hours can be shifted to family time, making it easier to start or grow a family.

Fertility is an awkward topic for journalists, because starting a family is such a complicated and intimate decision. But fertility rates aren’t declining simply because more people are choosing not to have children—American women report having fewer kids than they want, as Stone has documented in previous research. If remote work is subtly restructuring the contours of life to enable more women to have the families they want, that’s great news.

That is all from Derek Thompson at The Atlantic.


Ethan Mollick had a good Bing Chat interaction

What are four sentences that I could send back in time to Ancient Rome, and that they would understand, to teach them technologies which could prevent Rome’s collapse?

Here is the full post, interesting throughout.  Overall, academics have responded to GPTs in a pretty mediocre, non-insightful fashion, but Mollick is one of the few who has been on the ball in a positive way.


Smart phones and their problems

As Noah Smith points out in his recent Substack, smart phones are conceptually distinct from social media.  To some (modest) extent, it might be smart phones making young (and old?) people unhappy.

Speaking on a purely subjective level, I hate smart phones.  Better yet, I hate phones.

I love my iPad, and now once again I have two of them.  Apart from disliking phone calls, the iPhone itself frustrates me.  It is simply too small to make me happy.  I don’t like looking at it, and I use it instead of the iPad only in very restricted situations, such as the iPad being on the other side of the room and the iPhone still being in my pocket.

Have you noticed that no one can call you on your iPad?  Well, actually they can, but not the way I have it set up.

It is also bulkier to take out the iPad and hold it and use it.  That limits the number of times you check it, relative to an iPhone.

So maybe an iPad-based world would be somewhat healthier, mentally and otherwise.  The more general point is that we might be able to improve the psychological architecture of iPhones.

As a side query, if social media on iPhones are a harmful addiction, why do we observe so few attempts to quit them?  Why do we observe relatively few self-constraint devices for social media use?


What young people say is driving changes in their mental health

I think consistent with Matt's point here — you see young people identifying social media as a source of worsening mental health; but also emphasizing politics and national media.https://t.co/scOXEMNFjJ https://t.co/MGgdNIsz0m pic.twitter.com/hY47EOZFAO

— Arpit Gupta (@arpitrage) March 1, 2023

I would stress this point. Most of the explanations for declining teen mental health invoke contagion at one stage of the argument or another. That in turn means the initial causes can be fairly small, relative to the final outcome. Contagion itself is arguably the most important cause.


South Park Commons — the collectives model for spurring innovation

From the NYT circa 2017:

…the [South Park] Commons aims to fill a hole in the tech landscape. Northern California is littered with incubators and accelerators, organizations like Y Combinator and Techstars that help small companies develop and grow. This is something different, a community you can join before you have founded a company or even when you have little interest in founding one.

The Commons is a bit like the hacker spaces that have long thrived in the Valley — places where coders and makers gather to build new software and hardware — but it moves beyond that familiar concept. Its founder, for one thing, is a female engineer turned entrepreneur turned executive.

From SPC itself:

SPC is a de-risking platform. The community addresses the social and intellectual components of risk—it provides a close-knit, high-talent group during the idea stage so members can reach founder-market fit before attempting product-market fit. The SPC Fund plays the more traditional role of de-risking finances: our recently-launched Community Grant works much like Emergent Ventures; the Founder Fellowship (we’re currently accepting applications) is designed to get would-be founders to take the plunge; and we participate in the broader VC ecosystem with some later-stage investments.

Reminds me of the Junto Club, not to mention the 18th century more broadly;  SPC itself cites Junto as a model.  Think of it as a technical community of people without full-time jobs, plus a venture fund.  On the ground, technologists hang out with potential founders.  Here is TechCrunch on SPC.

Which are other recent examples of successful “community” models for spurring innovation?


From my WhatsApp

Tyler Cowen: I am, by the way, not so convinced by the Jon Haidt piece on social media and mental illness: https://jonathanhaidt.substack.com/p/social-media-mental-illness-epidemic?utm_source=post-email-title&publication_id=1221094&post_id=104255435&isFreemail=true&utm_medium=email
Tyler Cowen: He readily admits that across individuals social media use explains only a tiny portion of the variation in happiness. His response is that it is other people’s usage of social media that makes you unhappy, because you can’t go talk to them.
Tyler Cowen: So there are (in his view) only system-wide effects, nothing that can be verified at the micro level. [TC: note the “to the point” style of WhatsApp leads to certain exaggerations and inaccuracies]
Tyler Cowen: It seems to me that if the stuff makes you so miserable, young people should be able to build small social “pods” of individuals who don’t do the stuff so much, hang out together, and are just way happier.
Tyler Cowen: Furthermore, if a lot of the problem is “young girls comparing themselves to thinner others on Instagram” and the like, that should show up as an individual-level effect. Not a group effect. There can be those beautiful, envy-inducing models on Instagram even if only a small percentage of one’s peers are on Instagram.

Tyler Cowen: I agree the problem is larger with girls. And I think it is a mix of bullying, cyberbullying, envy, and unrealistic expectations. I just don’t think it is nearly as large a problem as he claims.
Tyler Cowen: And I think that often “going to talk to other people” — you know the “Mean Girls” — is the problem itself. In that sense his various hypotheses contradict each other.

You will note also the recent result that school closures lower, not raise, the rate of youth suicide.  It is thus hard for me to string together the hypotheses that a) “youth suicide rates are way up,” b) “this is because of social media,” and c) “social media make us miserable by taking away people to talk to and hang out with.”


New Facebook/Meta LLMs — researchers can apply for access

Today we release LLaMA, 4 foundation models ranging from 7B to 65B parameters.
LLaMA-13B outperforms OPT and GPT-3 175B on most benchmarks. LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B.
The weights for all models are open and available at https://t.co/q51f2oPZlE
1/n pic.twitter.com/DPyJFBfWEq

— Guillaume Lample (@GuillaumeLample) February 24, 2023


Which weapons in a major conflict can be most easily scaled?

Classic artillery and bombs end up being the default because they have more than enough range for the modern battlefield and are some of the few mass producible weapons.

And:

At the beginning of a war, both sides will have stocks of long-range weapons, satellites, and other recon assets. Most of those will likely be gone within a few weeks.

And:

Ammunition will be the most voracious microchip user.

Here is more from Austin Vernon on what the next major war will look like.


Predictions from *Average is Over*

My book is from 2013, here are some of the key predictions:

1. Increases in the power and generality of artificial intelligence will prove a major breakthrough within a foreseeable time period.

2. Labor market returns will accrue to individuals capable and willing to work with such services.

3. Resources and land are going to significantly increase in economic value, as they will remain relatively scarce.

4. Marketing will continue to rise in relative importance.

5. Managerial and “soft skills” will continue to increase in importance for earnings.

6. What we now call “quiet quitting” will be a thing.

7. At many corporations it will be possible to dismiss large numbers of workers without any decline in output.

8. Cheating with AI will arise as an issue of major importance, starting with cheating in chess, and the work of Kenneth Regan will turn out to be significant.

9. AI assessments of everything will rise in importance.

10. AI will produce more and more outputs that are so smart we will not be able to evaluate them as humans.

11. Free or near-free effective on-line education will soon become available, though it will remain an open question how many individuals will be interested in learning from it.

12. Good teaching will evolve more toward coaching and mentorship, as information provision will be handled by AI.

13. Intelligent machines soon will become effective producers of science, yet how they arrive at their results will not be legible to us.

14. With the aid of AI, there will be a resurgence of amateur science.

15. Machine learning and its successors will take over economic research.

Of course not all of those predictions have come true, but many have, and others are on the verge of realization.  The subtitle of the book is Powering America Beyond the Age of the Great Stagnation.


Emergent Ventures winners, 24th cohort

Shakked Noy, MIT economics, to do RCTs on GPTs as teaching and learning tools.

Gabriel Birnbaum, Bay Area, from Fortaleza, Brazil, to investigate lithography as a key technology used in the manufacturing of microchips.

Moritz Wallawitsch, Berkeley. RemNote is his company, educational technology, and to develop a complementary podcast and for general career development.

Katherine Silk, Boston/Cambridge, general career support and to support advice for early-stage startups.

Benjamin Schneider, Brooklyn.  To write a book on the new urbanism.

Joseph Walker, Sydney, Australia, to run and expand the Jolly Swagman podcast.

Avital Balwit, Bay area, travel grant and general career development.

Benjamin Chang, Cambridge, MA. General career support, “I will develop novel RNA riboswitches for gene therapy control in human cells using machine learning.”

Daniel Kang, Berkeley/Champaign-Urbana, biometrics and crypto.

Aamna Zulfifiqar, Karachi, Pakistan, to attend UK higher education to study economics.

Jeremy Stern, Glendale, CA, Tablet magazine.  To write a book.

James Meech, PhD student, Cambridge, UK, to work on a random number generator for better computer architectures.

Arthur Allshire, University of Toronto, background also in Ireland and Australia, robotics and support to attend conferences.

Jason Hausenloy, 17, Singapore, travel and general career development, issues surrounding artificial intelligence.

Sofia Sanchez, Metepec, Mexico, biology and agricultural productivity, to spend a summer at a Stanford lab.

Ukraine tranche:

Andrey Liscovich, eastern Ukraine, formerly of Harvard, to provide equipment for public transportation, communication and emergency power generation to civilian authorities of frontline-adjacent areas in Ukraine which have lost vital infrastructure.

Chris Nicholson, Bay area, working as a broker to maintain internet connectivity in Ukraine.

Andrii Nikolaiev, Arsenii Nikolaiev, Zarina Kodyrova, Kvanta, to advance Ukrainian mathematics, help and train math Olympiad winners.

As usual, India and Africa/Caribbean tranches will be reported separately.


A prediction from *Big Business*

As the years pass, search engines will compete across new and hitherto unforeseen dimensions, just as Apple and many other competitors knocked out Nokia cell phones.  There is no particular reason to think Google will dominate those new dimensions, and in fact Google’s success may stop it from seeing the new paradigms when they come along.  I don’t pretend I am the one who can name those new dimensions of competition, but what about search through virtual or augmented reality?  Search through the Internet of Things?  Search through the offline “real world” in some manner?  Search through an assemblage of AI capabilities, or perhaps in some longer-run brain implants…

p.104, here is the book (by me).


Why AI will not create unimaginable fortunes

From my Bloomberg column from last week:

A small number of AI services, possibly even a single one, likely will end up better than the others for a wide variety of purposes. Such companies might buy the best hardware, hire the best talent and manage their brands relatively well. But they will face competition from other companies offering lesser (but still good) services at a lower price. When it comes to LLMs, there is already a proliferation of services, with Baidu, Google and Anthropic products due in the market. The market for AI image generation is more crowded yet.

In economic terms, the dominant AI company might turn out to be something like Salesforce. Salesforce is a major seller of business and institutional software, and its products are extremely popular. Yet the valuation of the company, as of this writing, is about $170 billion. That’s hardly chump change, but it does not come close to the $1 trillion valuations elsewhere in the tech sector.

OpenAI, a current market leader, has received a private valuation of $29 billion. Again, that’s not a reason to feel sorry for anyone — but there are plenty of companies you might not have heard of that are worth far more. AbbVie, a biopharmaceutical corporation, has a valuation of about $271 billion, almost 10 times higher than OpenAI’s.

To be clear, none of this is evidence that AI will peter out. Instead, AI services will enter almost everyone’s workflow and percolate through the entire economy. Everyone will be wealthier, most of all the workers and consumers who use the thing. The key ideas behind AI will spread and be replicated — and the major AI companies of the future will face plenty of competition, limiting their profits.

In fact, AI’s ubiquity may degrade its value, at least from a market perspective. It’s likely the AI boom has yet to peak, but the speculative fervor is almost palpable. Share prices have responded to AI developments enthusiastically. Buzzfeed shares rose 150% in one day last month, for example, after the company announced it would use AI to generate content. Does that really make sense, given all the competition BuzzFeed faces?

It’s when those prices and valuations start falling that you will know the AI revolution has truly arrived. In the end, the greatest impact of AI may be on its users, not its investors or even its inventors.

We’ll see how those predictions hold up.

