My first book, Inside Deep Learning: Math, Algorithms, Models, can be found on Manning or Amazon. The physical copy of the book always comes with a digital copy. 

I wrote this book while developing a class on deep learning at UMBC. My goal was to think back to when I first started in the field: I wasn't particularly good at math and was struggling to learn about deep learning on my own. "What do I wish I had then to learn what I know now?" was the guiding thought behind the book. I'm really delighted by the number of emails and messages I've received over the years, especially from practitioners, saying that this was the book that made things click. I worked hard to strike a balance between code and math and tie them together at the right depth, and I think it came out quite well. 

It is one of Manning's only full-color prints because it is chock-full of diagrams and color-coded equations that tie the text to the math, to help explain it and remove its scariness! I've been delighted by students, practitioners, and even professors reaching out to say how much they appreciate the content! Below are two examples of the kinds of figures/explanatory diagrams I've tried to include regularly, especially the color-coded equations, which took me about an hour each to make. 
Deep learning is also a challenging field to get into because of the cost requirements. You generally need a GPU, at least $500, to do much of anything in the field. That's a steep price to pay if you aren't even sure you like DL yet, and that $500 could easily become $3,000 if your machine isn't easily upgradable or upgrading it isn't in your skill set. For this reason, I carefully designed everything to run within the free tier of Google's Colab environment, which provides a GPU, so that you can learn DL before investing heavily in it. The $10 and $50 per month subscriptions do help and are nice, though (I use the $50/month tier myself; not an ad). 
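As a quick illustration of that design goal, here is a minimal sketch, assuming the PyTorch setup the book uses, of how a Colab notebook can detect the free-tier GPU and fall back to the CPU. The specific model and tensor are just illustrative, not code from the book:

```python
import torch

# Use the free Colab GPU when one is attached, otherwise fall back to the CPU.
# (In Colab: Runtime -> Change runtime type -> GPU.)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}")

# Moving a model and data to the selected device works the same either way.
model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(32, 10, device=device)
print(model(x).shape)  # torch.Size([32, 2])
```

Writing code against a `device` variable like this is what lets the same notebook run on the free GPU tier, a paid tier, or a plain CPU without any changes.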
Finally, the content I've tried to make covers all the kinds of practical knowledge I would like an ideal employee to know when starting fresh in their career. The range of content in this book is pretty significant, and I think it has held up quite well even in a field as fast-moving as deep learning. No, GPT isn't in it, but the foundations behind it, like Transformers, gradient descent, and positional encodings, are! Indeed, the last chapter of the book is about coming full circle and showing readers that they've learned enough to understand what was, at the time, the very latest research. If you've made it through the book, you'll be in a good position to pick up the newer things that keep coming out. 
Below is an example table from the book conveying the practical knowledge I've developed over years of work about when to use which techniques.