Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

LW - Transfer Learning in Humans by niplav

23:31
 
Manage episode 413907890 series 3337129
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transfer Learning in Humans, published by niplav on April 22, 2024 on LessWrong.

I examine the literature on transfer learning in humans. Far transfer is difficult to achieve; the best candidate interventions are to practice at the edge of one's ability and make many mistakes, evaluate mistakes after one has made them, learn from training programs modeled after expert tacit knowledge, and talk about one's strategies when practicing the domain.

When learning, one would like to progress faster and learn things faster. So it makes sense to search for interventions that speed up learning (effective learning techniques), enable using knowledge and knowledge patterns from one learned domain in a new domain where appropriate (transfer learning), and make it easier to find further learning-accelerating techniques (meta-learning).

Summary

I've spent ~20 hours reading and skimming papers and parts of books from different fields and extracting the results from them; the resulting spreadsheet is here, and a Google doc with notes is here. I've looked at 50 papers, skimmed 20, and read 10 papers and 20% of a book. In this text I've included all sufficiently different interventions I've found that have been tested empirically. I'd classify the interventions tried by scientists as follows (ordered by how relevant and effective I think they are):

- Error-based learning: Trainees deliberately seek out situations in which they make mistakes. This has medium to large effect sizes at far transfer.
- Long training programs: These usually take the form of one- or two-semester classes on decision-making, basic statistics, and spatial thinking, and produce far transfer at small to medium effect sizes. They are usually tested on high-school or university students.
- Effective learning techniques: Things like doing tests and exercises while learning, or letting learners generate causal mechanisms. These produce zero or at best small amounts of far transfer, but speed up learning.
- OODA-loop-likes: Methods that structure the problem-solving process, such as the Pólya method or DMAIC. In most cases these haven't been tested well or at all, but they are popular in business contexts. They also all look the same to me, but probably have the advantage of functioning as checklists when performing a task.
- Transfer within domains: Methods intended to get knowledge about a particular domain from an expert to a trainee, or from training to application on the job. These methods have a high fixed cost, since experts have to be interviewed and whole curricula have to be created, but they work very well at the task they were created for (training is sometimes sped up by more than an order of magnitude).

Additionally, most of the research is on subjects who are probably not intrinsically motivated to apply a technique well (i.e. high-school students, military trainees, and university students), so there is a lot of selection pressure toward techniques that still work with demotivated subjects. I expect that many techniques work much better with already-motivated subjects, especially techniques that are easy to Goodhart.

In general, the tension I observed is that industry and the military are the ones who perform well/do non-fake things, but academia is the one that actually measures and reports those measures to the public. From talking with people in industry, they don't seem at all interested in tracking per-employee performance (e.g. Google isn't running RCTs on its engineers to increase their coding performance, and estimates for how long projects will take are not tracked & scored).

I also haven't seen many studies quantifying the individual performance of employees, especially high-earning white-collar knowledge workers. Recomme...

1657 episodes

