Robert Miles AI Safety

@RobertMilesAI · 51 videos

162K subscribers

Videos about Artificial Intelligence Safety Research, for everyone. AI is leaping forward right now, it's only a matter of time before we develop true Artifi...

Recent videos

AI Safety Career Advice! (And So Can You!) 23:42

Using Dangerous AI, But Safely? 30:38

Learn AI Safety at MATS #shorts 1:00

AI Ruined My Year 45:59

Apply to Study AI Safety Now! #shorts 1:00

Why Does AI Lie, and What Can We Do About It? 9:24

Apply Now for a Paid Residency on Interpretability #short 0:45

$100,000 for Tasks Where Bigger AIs Do Worse Than Smaller Ones #short 1:00

Free ML Bootcamp for Alignment #shorts 0:52

Win $50k for Solving a Single AI Problem? #Shorts 1:00

Apply to AI Safety Camp! #shorts 1:00

We Were Right! Real Inner Misalignment 11:47

Intro to AI Safety, Remastered 18:05

Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... 10:20

The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment 23:24

Quantilizers: AI That Doesn't Try Too Hard 9:54

Sharing the Benefits of AI: The Windfall Clause 11:44

10 Reasons to Ignore AI Safety 16:29

9 Examples of Specification Gaming 9:40

Training AI Without Writing A Reward Function, with Reward Modelling 17:52

AI That Doesn't Try Too Hard - Maximizers and Satisficers 10:22

Is AI Safety a Pascal's Mugging? 13:41

A Response to Steven Pinker on AI 15:38

How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification 11:32

Why Not Just: Think of AGI Like a Corporation? 15:27

Safe Exploration: Concrete Problems in AI Safety Part 6 13:46

Friend or Foe? AI Safety Gridworlds extra bit 3:47

AI Safety Gridworlds 7:23

Experts' Predictions about the Future of AI 6:47

Why Would AI Want to do Bad Things? Instrumental Convergence 10:36

Superintelligence Mod for Civilization V 1:04:40

Intelligence and Stupidity: The Orthogonality Thesis 13:03

Scalable Supervision: Concrete Problems in AI Safety Part 5 5:03

AI Safety at EAGlobal2017 Conference 5:30

AI learns to Create  ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1 5:20

What can AGI do? I/O and Speed 10:41

What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4 9:38

Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5 7:32

The other "Killer Robot Arms Race" Elon Musk should worry about 5:51

Reward Hacking: Concrete Problems in AI Safety Part 3 6:56

Why Not Just: Raise AI Like Kids? 5:51

Empowerment: Concrete Problems in AI Safety part 2 6:33

Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5 3:23

Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1 9:33

Robert Miles Live Stream

Are AI Risks like Nuclear Risks? 10:13

Respectability 5:04

Predicting AI: RIP Prof. Hubert Dreyfus 8:17

What's the Use of Utility Functions? 7:04

Where do we go now? 7:45

Status Report 1:26

Channel Introduction 1:05

Videos

AI Safety Career Advice! (And So Can You!) 23:42

42K views - 5 days ago

Using Dangerous AI, But Safely? 30:38

120K views - 6 months ago

AI Ruined My Year 45:59

250K views - 11 months ago

Why Does AI Lie, and What Can We Do About It? 9:24

260K views - 2 years ago

We Were Right! Real Inner Misalignment 11:47

250K views - 3 years ago

Intro to AI Safety, Remastered 18:05

170K views - 3 years ago

Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... 10:20

88K views - 3 years ago

The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment 23:24

240K views - 4 years ago

Quantilizers: AI That Doesn't Try Too Hard 9:54

87K views - 4 years ago

Sharing the Benefits of AI: The Windfall Clause 11:44

80K views - 4 years ago

10 Reasons to Ignore AI Safety 16:29

340K views - 4 years ago

9 Examples of Specification Gaming 9:40

310K views - 5 years ago