AI Resources

Landmark papers (chronological)

  1. “A Logical Calculus of the Ideas Immanent in Nervous Activity” – W. S. McCulloch & W. Pitts (1943)
    What it says: Shows that networks of simple binary “formal neurons” can realise any logical expression and, given memory, match the computational power of Turing machines, an early statement of computationalism.
    Why it matters: Laid the mathematical groundwork for neural networks and inspired later perceptron work.
    Access: Reprinted in many collections; original Bulletin of Mathematical Biophysics article.
  2. “Computing Machinery and Intelligence” – Alan M. Turing (1950)
    What it says: Frames the now‑famous Imitation Game (Turing Test) to recast “Can machines think?” as an empirical research question.
    Why it matters: Gave AI its first rigorous, testable criterion of intelligence and introduced many still‑relevant objections and replies.
    Access: DOI 10.1093/mind/LIX.236.433
  3. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain” – Frank Rosenblatt (1958)
    What it says: Describes a learning rule for a single‑layer neural classifier, implemented in hardware or software (a minimal sketch of the rule appears after this list).
    Why it matters: One of the first learning algorithms demonstrated on real sensory hardware; sparked a decades‑long debate about connectionism versus symbolism.
    Access: Psychological Review 65(6), 386–408.
  4. “Learning Representations by Back‑Propagating Errors” – D. E. Rumelhart, G. E. Hinton & R. J. Williams (1986)
    What it says: Introduces the general back‑propagation algorithm for multi‑layer networks, enabling end‑to‑end gradient learning (a two‑layer example appears after this list).
    Why it matters: Revived neural networks after the first “AI winter” by showing they could learn useful internal representations.
    Access: Nature 323, 533–536.
  5. “ImageNet Classification with Deep Convolutional Neural Networks” – A. Krizhevsky, I. Sutskever & G. Hinton (2012)
    What it says: Demonstrates that deep CNNs trained on GPUs sharply cut ImageNet error rates using ReLUs, dropout and data augmentation.
    Why it matters: Triggered the modern deep‑learning boom across vision and beyond.
    Access: NeurIPS (NIPS 2012) Proceedings.
  6. “Playing Atari with Deep Reinforcement Learning” – V. Mnih et al. (2013)
    What it says: Coupling Q‑learning with convolutional networks, the DQN agent learns control policies directly from pixels, surpassing human experts on several Atari games (the core update rule is sketched after this list).
    Why it matters: Showed deep learning could scale to sequential decision‑making and sparked today’s deep‑RL research wave.
    Access: arXiv:1312.5602.
  7. “Generative Adversarial Nets” – I. Goodfellow et al. (2014)
    What it says: Poses generation as a two‑player minimax game between a generator and a discriminator network (the objective is sketched after this list).
    Why it matters: Opened a new paradigm for realistic image, audio and data synthesis, and for adversarial training more broadly.
    Access: arXiv:1406.2661.
  8. “Mastering the Game of Go with Deep Neural Networks and Tree Search” – D. Silver et al. (2016)
    What it says: Combines policy/value networks with Monte Carlo tree search to create AlphaGo, which defeated the European champion Fan Hui and, shortly after publication, world champion Lee Sedol.
    Why it matters: Landmark proof that deep RL plus search can conquer a domain long considered out of reach for AI.
    Access: Nature 529, 484–489.
  9. “Attention Is All You Need” – A. Vaswani et al. (2017)
    What it says: Introduces the Transformer architecture, replacing recurrence with multi‑head self‑attention for sequence transduction (scaled dot‑product attention is sketched after this list).
    Why it matters: Forms the backbone of today’s large language models and many vision/audio architectures.
    Access: arXiv:1706.03762.
  10. “BERT: Pre‑training of Deep Bidirectional Transformers for Language Understanding” – J. Devlin et al. (2018)
    What it says: Presents masked‑language‑model pre‑training followed by task‑specific fine‑tuning, achieving state‑of‑the‑art results on 11 NLP benchmarks (the masking step is sketched after this list).
    Why it matters: Popularised the foundation‑model pre‑train/fine‑tune workflow and sparked a new scaling race in NLP.
    Access: arXiv:1810.04805.
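
A few of the techniques above are compact enough to sketch in code. The short Python snippets below are illustrative sketches, not reference implementations. First, the perceptron learning rule from paper 3; the toy AND dataset, learning rate and epoch count are my own choices:

    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=20):
        """Rosenblatt-style perceptron; labels y must be +1/-1."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                # Hard-threshold prediction; update weights only on mistakes.
                if yi * (np.dot(w, xi) + b) <= 0:
                    w += lr * yi * xi  # rotate the boundary toward the point
                    b += lr * yi
        return w, b

    # Toy usage: learn logical AND, which is linearly separable.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]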
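Paper 4’s contribution in miniature: a two‑layer network trained on XOR with hand‑derived gradients (network width, learning rate, squared‑error loss and seed are illustrative choices; a toy run like this can occasionally stall in a poor local minimum):

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR data: not linearly separable, so a hidden layer is required.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)        # hidden activations
        y = sigmoid(h @ W2 + b2)        # network output
        # Backward pass: chain rule, layer by layer.
        dy = (y - t) * y * (1 - y)      # error signal at the output layer
        dh = (dy @ W2.T) * h * (1 - h)  # error propagated to the hidden layer
        # Gradient-descent updates.
        W2 -= 0.5 * (h.T @ dy); b2 -= 0.5 * dy.sum(axis=0)
        W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

    print(y.round(2).ravel())  # typically approaches [0, 1, 1, 0]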
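Paper 6 builds on the classical Q‑learning update, shown here in tabular form on a toy chain environment; the DQN paper replaces the table with a convolutional network over raw pixels (the environment, hyperparameters and purely random behaviour policy are my own simplifications):

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2          # chain 0..4; action 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.9             # learning rate, discount factor

    for _ in range(500):                # episodes
        s = 0
        while s != 4:                   # state 4 is terminal and pays the only reward
            a = int(rng.integers(n_actions))  # random behaviour; Q-learning is off-policy
            s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == 4 else 0.0
            # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
            target = r if s_next == 4 else r + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next

    print(Q.argmax(axis=1))  # greedy policy: move right in states 0-3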
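Paper 7’s minimax objective, min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))], reduces to two scalar losses once the discriminator has scored real and generated samples (function names are mine; the non‑saturating generator loss is the heuristic the paper itself suggests):

    import numpy as np

    def discriminator_loss(d_real, d_fake):
        # D wants d_real -> 1 and d_fake -> 0 (it maximises V, so we minimise -V).
        return -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

    def generator_loss(d_fake):
        # Non-saturating variant: G maximises log D(G(z)) rather than
        # minimising log(1 - D(G(z))), which gives stronger early gradients.
        return -np.log(d_fake).mean()

    # Sanity check: an undecided discriminator (all scores 0.5) sits at the
    # theoretical equilibrium, where the discriminator loss equals log 4.
    scores = np.full(8, 0.5)
    print(discriminator_loss(scores, scores))  # ~1.386 == log(4)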
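The heart of paper 9 is scaled dot‑product attention, small enough to write out directly (single head, no masking and no learned projections; the toy shapes are illustrative):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """softmax(Q K^T / sqrt(d_k)) V, as defined by Vaswani et al. (2017)."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # pairwise query-key similarity
        scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V                            # attention-weighted sum of values

    # Toy self-attention: 3 tokens with model dimension 4, so Q = K = V.
    x = np.random.default_rng(0).normal(size=(3, 4))
    print(scaled_dot_product_attention(x, x, x).shape)  # (3, 4)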
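Finally, the masked‑language‑model objective from paper 10 starts with a masking step like the one below (the toy vocabulary and sentence are mine, and the -100 “ignore” label is a convention from common implementations, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hide ~15% of tokens; the model must predict them from bidirectional context.
    vocab = {"[MASK]": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}
    tokens = np.array([1, 2, 3, 4, 1, 5])     # "the cat sat on the mat"

    mask = rng.random(len(tokens)) < 0.15     # choose ~15% of positions at random
    inputs = np.where(mask, vocab["[MASK]"], tokens)
    labels = np.where(mask, tokens, -100)     # -100 marks positions excluded from the loss

    # The model sees `inputs` and is trained to recover `tokens` at the
    # masked positions only; unmasked positions contribute nothing to the loss.
    print(inputs)
    print(labels)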

YouTube Channels

  • MIT 6.S191 – Introduction to Deep Learning: Annual MIT short course (all lectures free) covering fundamentals, transformers, diffusion models, and applied labs; an ideal structured curriculum.
  • StatQuest with Josh Starmer: Crystal‑clear statistics and ML math explanations—breaks down derivations like back‑prop or logistic regression step by step. Great primer before diving into code.
  • DeepLearning.AI – founded by Andrew Ng: University‑style lecture shorts, guest talks, and full Coursera course videos on ML, NLP, RL and prompt engineering. Great balance of theory and practical coding demos.
  • Henry AI Labs: Research‑focused channel summarising papers, benchmarks and ethical debates; useful for grasping current trends quickly.
  • Retured: Bite‑size AI wisdom in under ten minutes, from 60‑second Shorts to 5–10 minute deep‑dives, swapping heavy equations for plain‑English stories, brain‑science analogies and real‑world demos. Quick, brain‑friendly AI that fits a coffee break, no math degree required.

Retured – Transforming Expertise into AI‑Driven Impact

Bridging the gulf between deep domain know‑how and fast‑moving technology, Retured equips professionals, students and organisations to build actionable AI solutions with clarity, confidence and creativity. Businesses are racing to embed AI, yet most still struggle to turn vision into working products — only a minority report organisation‑wide adoption. At the same time, 57 % of employees say they want structured AI upskilling from their employers. Retured’s four‑pillar model meets that demand head‑on:

  1. Mentorship for Domain Experts: One‑to‑one or small‑group guidance turns specialist insight into a clear product roadmap, while the mentee retains ownership of execution. Effective mentorship is a proven accelerator for capability building in knowledge‑heavy sectors.
  2. Project‑Based Learning for Students: We partner with universities to run outcome‑driven AI and neuroscience projects. Meta‑analysis shows project‑based learning significantly boosts academic performance, while active‑learning techniques double long‑term retention.
  3. Customized Corporate Training: Modular, use‑case‑driven workshops help enterprises and public agencies apply AI to real problems—financial analysis, operational optimisation, knowledge automation—delivering measurable ROI within existing workflows. Generative‑AI case studies in finance and manufacturing illustrate the gains possible when training is aligned to immediate needs.
  4. Applied Research & Innovation: Our team explores the frontier where human cognition meets machine intelligence, from LLM‑powered automation to neuro‑inspired architectures. Advances in neuromorphic computing and LLM technology are redefining what organisations can automate next.

Why Retured? We focus on why and how, not just what: guiding experts to productise ideas, giving students portfolio‑ready projects, and enabling teams to deploy cutting‑edge AI with minimal disruption—turning minutes of learning into mastery and genuine impact.