Coding Blocks

A Software How-To, Technology, and Education podcast featuring Joe Zack
Pragmatic talk about software design best practices: design patterns, software architecture, coding for performance, object oriented programming, database design and implementation, tips, tricks and a whole lot more.

You'll be exposed to broad areas of information as well as deep dives into the guts of a programming language. Most topics discussed are relevant in any number of object-oriented programming languages such as C#, Java, Ruby, PHP, etc. All three of us are full-stack web and database / software engineers, so we discuss JavaScript, HTML, SQL, and a full spectrum of technologies, and we are open to any suggestions anyone might have for a topic. So please join us, subscribe, and invite your computer programming friends to come along for the ride.

Recent Episodes

97. Data Structures – (some) Trees
We ring in 2019 with a discussion of various trees as Allen questions when you should abstract, while Michael and Joe introduce us to the Groot Tree. Are you reading these notes via your podcast player? You can find the full show notes for this episode on the website and join the conversation.

Sponsors
- O’Reilly Software Architecture Conference – Microservices, domain-driven design, and more. The O’Reilly Software Architecture Conference covers the skills and tools every software architect needs. Use the code BLOCKS during registration to get 20% off of most passes.
- A digital space for bot developers and enthusiasts of all skill levels to learn from one another, share stories, and move the conversation forward. New to bots? Get started with their Beginner’s Guide.
- Clubhouse – The first project management platform for software development that brings everyone on every team together to build better products. Sign up for two free months of Clubhouse.

Survey Says
How good were the holidays to you?
- Awesome. Got everything I hoped for.
- I got things I didn't even ask for.
- It was good. I got everything I purchased.
- Glad to be back at work.
- I didn't get …

News
- Thank you to everyone that left us a review:
  iTunes: SoundShop, TimGraf, bothzoli
  Stitcher: KSprings, Trickermand, Delden, Flo_
- Is it really always better to code to an interface?
- I am Groot

Why do trees matter?
Trees are one of the most important data structures because there are certain types of problems that are really difficult to solve without them, and they are really efficient for certain tasks. They are also some of the most difficult for programmers to deal with, because you often need recursive algorithms to work with them. Recursive solutions are neat, but they can easily lead to stack overflows if you are not careful (especially if your language doesn’t support tail recursion). Many traditional persistence layers are not set up to easily store tree data.
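The recursion concern can be sketched in a few lines of JavaScript. This is our own illustration, not code from the show; the `TreeNode`, `countNodes`, and `countNodesIterative` names are made up for the example:

```javascript
// Minimal tree node: a value plus an array of children.
class TreeNode {
  constructor(value, children = []) {
    this.value = value;
    this.children = children;
  }
}

// Recursive count: elegant, but each level adds a call-stack frame,
// so a very deep (degenerate) tree can overflow the stack.
function countNodes(node) {
  return 1 + node.children.reduce((sum, child) => sum + countNodes(child), 0);
}

// Iterative version: an explicit stack trades call-stack depth for heap memory.
function countNodesIterative(root) {
  let count = 0;
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    count++;
    stack.push(...node.children);
  }
  return count;
}

// A tiny file-system-like tree: / -> home -> alice, and / -> etc
const root = new TreeNode('/', [
  new TreeNode('home', [new TreeNode('alice')]),
  new TreeNode('etc'),
]);
```

Both functions visit every node exactly once; the iterative form is the usual escape hatch when tail-call support isn't available.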
Why programmers use trees:
- Model “real world” hierarchical data.
- Memory/processor efficient.
- “Good enough” solutions.

Common uses of trees
- Manipulate hierarchical data (persistence, middleware, UI): directory structure on your computer, product categories on an eCommerce site, interview questions, data files (XML, HTML, JSON).
- Search.
- Manipulate sorted lists of data.
- As a workflow for compositing digital images for visual effects.
- Router algorithms.

There are lots of trees
The basic structure of a tree is very simple: in most cases you can imagine a wrapper node object that has a value (your data) and a set of child nodes. There are other ways to store trees, like in an array, but you’ll generally see those used in special cases like heaps or when you are serializing data. … that’s it! If you’re looking at a file system, then the value would contain the current path and other metadata (like permissions) about that path, and the children of that node would contain other nodes. Despite the simple example, Wikipedia lists 115 types of trees across 7 categories (and 20 types of arrays!). The many tree categories:
- Binary trees
- B-trees
- Heaps
- Trees (value trees)
- Multiway trees
- Space-partitioning trees
- Application-specific trees

How can you have so many types for such a simple concept as value + children?
- Constraints on the data structure (binary vs n-ary).
- Stored metadata (red-black).
- Rules you use for interacting with the trees (heaps).
These variations have big downstream effects.

Common terminology
Although the data structure itself is simple, there is a lot of data that you can extrapolate about the entire tree. Algorithms use this data in addition to the data stored in the nodes. The basics:
- Root is the topmost node of the tree.
- Edge is the link between two nodes.
- Child is a node that has a parent node.
- Parent is a node that has an edge to a child node.
- Leaf is a node that does not have a child node in the tree.
- Height is the length of the longest path from a node down to a leaf.
- Depth is the length of the path from a node up to the root.
For example, if you are dealing with a really large tree, then you may opt to return a “good enough” solution that doesn’t exceed a certain depth. If you’re performing a lot of searches on a tree, then it may make sense to optimize the tree as you insert and delete nodes so that your searches check a consistently small set of nodes. The point is, even if the data structure is small, there are big time implications, and the recursive algorithms you write can be … mind bendy.

Binary Tree
What is it?
Unlike arrays, lists, queues, stacks, etc., trees are not linear data structures; they are hierarchical. A binary tree is a tree whose elements have at most two children, usually referred to as left and right.
How does it work?
Every tree has a root node – the topmost node in the tree – which has no parent. Nodes or elements that have no children are called leaf nodes. Nodes are referred to as either a parent or a child. Types of binary trees:
- Full binary tree – every node has 0 or 2 children.
- Complete binary tree – every level is completely filled except possibly the last, which is filled from left to right. A binary heap is an example of this.
- Perfect binary tree – all internal nodes have two children, and all leaf nodes are at the same level.
- Balanced binary tree – the height of the tree is O(log n), where n is the number of nodes. Great for performance, with O(log n) time for searches, inserts, and deletes.
- Degenerate or pathological tree – each node has exactly one child (until you get to the leaf node). Basically no different than a linked list.
Pros
- When ordered, can provide a sped-up search over linked lists, but still slower than an ordered array.
- Can be performant on inserts/deletes into the tree – faster than inserting into the middle of an array, but slower than a linked list.
- No limit on the number of nodes that can be added – unlike an array – because each node contains pointers to its child (and possibly parent) nodes.
Cons
- Extra memory; not as efficient as arrays.
When to use?
- When you need to do comparisons and you’re not sure how many you’ll need, or there will be many inserts/deletes.

B-Trees
What is it?
It’s a self-balancing search tree (compare the AVL tree, named for Adelson-Velsky and Landis). The primary idea behind a B-tree is reducing the number of times the disk is accessed – keep as much in memory as possible. Most operations – search, insert, delete, max, min – require O(h) disk accesses, where h is the height of the tree. This is achieved by creating a “fat tree”: the tree is short but wide, which is accomplished by putting as many keys as possible in a single B-tree node. Typically a B-tree node is the same size as the block size on the disk.
- All leaf nodes are at the same level.
- The minimum degree “t” depends on the block size of the disk. Every node except the root must contain at least t − 1 keys; the root may contain as few as 1 key.
- All nodes, including the root, may have at most 2t − 1 keys.
- The number of children of a node equals the number of keys in that node + 1.
- All the keys are sorted in order within a node.
- B-trees grow and shrink from the root, unlike a binary search tree, which grows and shrinks vertically (downward).
- Time complexity to search, insert, and delete is O(log n).
How does it work?
Search – similar to a binary search tree: if you’re searching for a key, you recurse down the nodes. If a non-leaf node has the key, the node itself is returned. If it doesn’t, you traverse into the child that precedes the first key in the current node with a greater value. If you make it all the way to a leaf node and the key isn’t found, NULL is returned.
Insert: always starts at a leaf node.
Do a binary search to find the node where the insert should happen. If there is room, just insert the new key in that node, keeping the keys ordered. If the node is full:
- Evenly split the node into two nodes.
- Choose a median from the elements being split – anything lower than the median goes to the new left node, and anything greater goes to the new right node.
- The median goes into the parent node, and the splitting can keep going up the tree.
Pros
- Better, more consistent search times than a binary search tree because of the balancing. There are self-balancing binary trees (like red-black trees), but they are optimized for batch inserts.
Cons
- If you’ve hit the max degree, then you need to rebalance the tree, which can make inserts slower.
When to use?
- B-trees are well suited to storage systems that read and write relatively large blocks of data, such as disks.
- Commonly used in databases and file systems, because the aim is to reduce the number of disk reads necessary.

Resources We Like
- The Imposter’s Handbook
- List of data structures (Wikipedia)
- What is tail recursion? (Stack Overflow)
- Tree (data structure) (Wikipedia)
- Everything you need to know about tree data structures (freeCodeCamp)
- Binary tree (Wikipedia)
- Binary search tree (Wikipedia)
- Binary Search Tree (GeeksforGeeks)
- B-tree (Wikipedia)
- B-Tree | Set 1 (Introduction) (GeeksforGeeks)
- AVL tree (Wikipedia)
- B-Trees visualization (USFCA)
- Red-black tree (Wikipedia)
- B+ tree (Wikipedia)

Tip of the Week
- Use PlantUML to create plain text UML diagrams. There’s also a VS Code extension for it. Thanks bothzoli!
- Walkthrough: Debugging a Parallel Application in Visual Studio (Visual Studio Docs)
- In SQL Server Management Studio, when executing multiple SELECT statements, the record count in the bottom right corner shows the aggregate count for all queries. Click on a specific result set to see the individual count for a particular query.
- When using OS X/macOS, remember that a lot of functionality might be hidden. Use the OPTION key to see additional functionality.
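Pulling the episode’s tree talk together in code: here is our own illustrative (unbalanced) binary search tree sketch in JavaScript. The `insert` and `contains` names are ours, not from the show:

```javascript
// A basic (unbalanced) binary search tree: smaller keys go left, larger go right.
function insert(node, key) {
  if (node === null) return { key, left: null, right: null };
  if (key < node.key) node.left = insert(node.left, key);
  else if (key > node.key) node.right = insert(node.right, key);
  return node; // duplicate keys are ignored
}

function contains(node, key) {
  while (node !== null) {
    if (key === node.key) return true;
    node = key < node.key ? node.left : node.right;
  }
  return false;
}

// Inserting in mixed order gives a bushy tree; inserting sorted data would
// produce the degenerate, linked-list-like tree mentioned in the episode.
// Self-balancing variants (AVL, red-black) exist to avoid that case.
let root = null;
for (const key of [8, 3, 10, 1, 6]) root = insert(root, key);
```

On a balanced tree, both operations only ever touch O(log n) nodes per lookup, which is the payoff the episode keeps coming back to.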
96. Data Structures – Hashtable vs Dictionary
Just in time to help you spread some cheer this holiday season, the dad jokes are back as we dig into the details of hash tables and dictionaries. Hey you. Yes you. Are you reading this via your podcast player? You can find this episode’s full show notes and be a part of the conversation on the website.

Sponsors
- Manning Publications – Sign up to Manning’s Deal of the Day to participate in the Countdown to 2019 to be in the running to win awesome prizes and get special daily discounts.
- Datadog – Sign up today for a free 14 day trial and get a free Datadog t-shirt after creating your first dashboard.

Survey Says
How do you plan to spend your time off this holiday season?
- Spending time with the family because the holidays are all about the memories.
- I’m not avoiding the family, I’m building my next great project.
- Escaping the family _and_ the keyboard.
- What time off?

News
- Thank you to everyone that took a moment to make our holidays a little bit brighter by leaving a review:
  iTunes: krauseling, Jla115, Ross44ross, hutchybong, JonMabale, code_in_my_robe, Saltoshi, SXENedger, BubbaCow
  Stitcher: Järvinen, EliavCodeIsrael, sbgood, maxim_bendiks, Nathan Iyer
- Thanks to chumak84 for the clarification of string interning. (Wikipedia)
- We play a little game of “how much data is generated every minute?” using data from Domo’s Data Never Sleeps 5.0 infographic. (Domo)

Hashtable vs Dictionary
Hashtable
About hash tables
When it comes to reading data, arrays are great (and fast) because getting to the next element is just a matter of math (i.e. the offset is the size of the type of thing in the array). But when it comes to writing, or more specifically inserting, data, linked lists are great because we only need to adjust some pointers.
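The array side of that trade-off works because an index is just math; a hash function is what lets arbitrary keys play the same game. A toy sketch in JavaScript – this `hashToIndex` is our stand-in for illustration, not any runtime’s real hash:

```javascript
// Turn a string key into a small integer index into a fixed-size array.
// The classic "multiply by 31 and add" string hash, kept as an unsigned 32-bit int.
function hashToIndex(key, capacity) {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % capacity;
}

// The same key always lands in the same slot, so a read becomes
// array index math – no scanning required.
const slot = hashToIndex('cat', 8);
```

Different keys can still land in the same slot, which is exactly the collision problem the rest of this section is about.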
A hash table takes the best of these two worlds and smashes them together … sometimes. The MSDN documentation succinctly describes the Hashtable class: _Represents a collection of key/value pairs that are organized based on the hash code of the key._
How do they work?
Consider the core structure of the hash table to be an array. Each element contains an object holding the key and the value(s). In the Wikipedia article for hash table, this object is called a bucket. Bucket?! The data that you want to use as the key to read and/or write is hashed, and the hash serves as the index into the array. Implementations vary based on how collisions (i.e. keys that produce the same hash) are handled.
Popular collision resolution strategies
- Separate chaining: the elements in the array are pointers to a linked list of buckets. During a collision, you traverse the list looking for your key. A good hash table will rarely have more than 3 items at the same array index, so the list traversal is short.
- Open addressing: the elements in the array are the buckets themselves, and the hash value serves as a starting point. During a write, if we go to that index and something is already there, we continue on to the next element looking for an empty slot to write our value. During a read, we similarly start at the index of the hash and look for our key, continuing until we either find it or hit an empty slot.
- Other strategies include cuckoo hashing, hopscotch hashing, and Robin Hood hashing.
- 2-choice hashing: as the name implies, two hash functions are used. During a write, the new key/value pair is written to whichever of the two candidate indexes already holds fewer objects.
Complexity
- Space: average and worst case O(n).
- Search, insert, and delete: average O(1), worst case O(n).
Pros
- Speed.
Hash tables are fast. Reads are generally O(1) (ignoring collisions). If there is a collision, the read/write time _can be_ reduced to O(n/k), where k is the size of the hash table – which simplifies to O(n). Assuming a good hashing algorithm is used, performance will usually be O(1). But “good” has to include the performance of the hashing algorithm itself … a slow algorithm might still be O(1) yet slower than the alternatives.
Cons
- Depending on the language (looking at you, C#), the Hashtable type is loosely typed.
- The cost of the hashing function can be more than just looping over a list, especially for few entries.
- Because hash table entries are spread around, there is poor “locality of reference”, which can trigger processor cache misses.
- Cache performance can also be poor or ineffective depending on the implementation, such as separate chaining.
- Performance degrades when there are many collisions.
When to use
If we’re talking language-specific implementations of the Hashtable _type_ … um, don’t? At least, not in C# – prefer the Dictionary type. And not in Java either – use HashMap. In more general terms, use them when:
- You need an associative array … i.e. you want to access the array by some key, like a name, rather than by index.
- Database indexing.
- Caches.
- Sets.

Dictionary
What is it? And how does it work?
Like a hash table, a dictionary holds key/value pairs. But by definition hash tables are weakly typed, and dictionaries are not:
Hashtable numbers = new Hashtable();
Dictionary<int, string> dictionary = new Dictionary<int, string>();
C# detail: Hashtable and Dictionary use different collision strategies. Hashtable uses a rehash strategy, while Dictionary utilizes “chaining” – using a secondary data structure (a sparse array) rather than re-hashing. Chaining is … complicated, and there are different methods for doing it (separate chaining vs open addressing).
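Separate chaining is easy to see in a bare-bones sketch. This is our own illustrative JavaScript (class and method names are made up for the example), not how any real runtime implements its tables:

```javascript
// Toy hash table using separate chaining: each array slot holds a
// list of [key, value] pairs whose keys hashed to the same index.
class ChainedHashTable {
  constructor(capacity = 8) {
    this.buckets = Array.from({ length: capacity }, () => []);
  }

  // Same stand-in string hash as before, reduced to a bucket index.
  _index(key) {
    let hash = 0;
    for (const ch of String(key)) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
    return hash % this.buckets.length;
  }

  set(key, value) {
    const bucket = this.buckets[this._index(key)];
    for (const pair of bucket) {
      if (pair[0] === key) { pair[1] = value; return; } // overwrite existing key
    }
    bucket.push([key, value]); // collision? just append to the chain
  }

  get(key) {
    const bucket = this.buckets[this._index(key)];
    for (const [k, v] of bucket) if (k === key) return v;
    return undefined; // not found
  }
}

const table = new ChainedHashTable(4); // tiny capacity to force collisions
table.set('foo', 1);
table.set('bar', 2);
table.set('foo', 3); // overwrite
```

With only 4 buckets, some keys are guaranteed to share a chain, yet lookups stay correct because each chain is scanned for an exact key match.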
Wait, but didn’t we just talk about how adding and removing items from an array takes time? Doesn’t that mean chaining will be slower than a normal hash lookup? What about memory size – sounds like it could potentially double? Nah, it’s fine. Chaining ends up running in O(1) on average as well, because of the implementation details of chaining. Weird math ahead … namely, the number of nodes is kept in proportion to the number of buckets. This keeps the “load factor” (the ratio of nodes to buckets) low, so the average lookup for separate chaining ends up being O(n/m) – and since n and m stay roughly equal, you get 1 … on average.
Pros
- Same pros as hash tables.
- In static languages, dictionaries are more efficient because no boxing is necessary.
- Operations are safer, because errors are caught at compile time.
Cons
- Same cons as hash tables.
When to use
- Whenever you need a hash-table-like data structure but want type safety.
- Fast insert, delete, lookup – sparse data.
What about …
Hashes and dictionaries technically don’t support ordering, so what the heck is IOrderedDictionary?
Dictionary vs Hashtable in C#
Hashtable uses the type object for both the key and value, meaning they are loosely typed. This also means value types like int get boxed/unboxed during use (see episode 2). Dictionary, on the other hand, is strongly typed: the key and value types are explicitly defined in the code, which favors failing at compile time rather than at run time. This is a valid use of a Hashtable in C#:

using System;
using System.Collections;

public class Program
{
    public static void Main()
    {
        var h = new Hashtable();
        h.Add("foo", "bar");
        h.Add(1, 2);
        foreach (var key in h.Keys)
        {
            Console.WriteLine(h[key]);
        }
    }
}

Closing Thoughts
C#’s Dictionary is like Java’s Hashtable: in C#, Dictionary<TKey,TValue>; in Java, Hashtable<K,V>. In fact, in Java, Hashtable extends Dictionary (an abstract class) – although you’re supposed to use HashMap instead.
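On the JavaScript side, the closest analogue to a typed dictionary is Map. There’s no compile-time type safety, but unlike plain objects, Map keys keep their types instead of being coerced to strings – a small sketch of ours to show the difference:

```javascript
// Map keeps key types intact: the number 1 and the string '1' are different keys.
const m = new Map();
m.set(1, 'number key');
m.set('1', 'string key');

// Plain objects coerce property keys to strings, so these collide.
const o = {};
o[1] = 'number key';
o['1'] = 'string key'; // overwrites the previous entry
```

This is one reason Map is usually the better choice when keys aren’t simple strings.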
And a JavaScript Object can be used like a dictionary (aka associative array).

Resources We Like
- The Imposter’s Handbook
- List of data structures (Wikipedia)
- Big-O Complexity Chart
- Hash table (Wikipedia)
- Hashtable Class
- Dictionary<TKey, TValue> Class
- Class Hashtable<K,V>
- Back to basics: Dictionary part 1, hash tables (Mark Vincze’s coding blog)
- Back to basics: Dictionary part 1, hash tables – Separate chaining (Mark Vincze’s coding blog)
- An Extensive Examination of Data Structures Using C# 2.0
- Why is Dictionary preferred over Hashtable? (Stack Overflow)
- What’s the difference between Hashtable and Dictionary? (Stack Overflow)
- When to use a HashTable (Stack Overflow)
- Data Structures and Algorithms: When To Use What? (theGarbageCollector)
- Boxing and Unboxing in .NET (Coding Blocks Episode 2)

Tip of the Week
- Join the Slack community and join in on all of the fun in #pet-pictures.
- Visualize your time series data using Grafana. (Grafana Labs)
- Add emojis to your files (_never_ in your code!) in Visual Studio Code by using WIN+; Thanks AngryZoot! (Twitter)
95. Data Structures – Arrays and Array-ish
We continue our deep dive into data structures, this time focusing on arrays and array-like types as Allen gives Shania Twain some singing competition, Joe is going to owe us another tattoo, and wait … when does Michael think C++ was invented? How are you reading this? If you’re using your podcast player to read these show notes, you can view this show’s full show notes and be a part of the discussion on the website.

Sponsors
- Datadog – Sign up today for a free 14 day trial and get a free Datadog t-shirt after creating your first dashboard.
- Manning Publications – Sign up to Manning’s Deal of the Day to participate in the Countdown to 2019 to be in the running to win awesome prizes and get special daily discounts.
- A digital space for bot developers and enthusiasts of all skill levels to learn from one another, share stories, and move the conversation forward. New to bots? Get started with their Beginner’s Guide.

Survey Says
What do you want to focus on improving in 2019?
- Front-End, there’s a 3p service for everything now.
- Back-End, cuz Flexbox … done.
- Persistence is king, data data data.
- Algorithms and data structures, can’t go wrong with fundamentals.
- Clean Code, master the tactical before the strategic.
- Architecture, I’ve ascended to higher levels of abstraction.
- DevOps, good luck doing anything without me!

News
- We take a moment to say thank you to everyone that took time out of their busy schedules to leave us a review:
  iTunes: Mr_Joe1, Scweiss1, akaryrye, JacobCastiglioni
  Stitcher: kannan, WeirdFlexButOK
- How do you get other people within a company to get on board with writing better code?
- Is it megabits? Or megabytes?

Arrays + Cousins
Array
What is it? And how does it work?
A way to store a collection of items. In C, arrays are stored in a contiguous block of memory. You either have to declare the array’s type and size, or initialize the array with its elements at the time it is defined, or both.
Example initializations:
int myArray[10];
int myArray[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int myArray[10] = {1, 2, 3};
There is no index checking in C – ever get that index-out-of-bounds error in our other languages? So what happens here? It doesn’t blow up – you’ll just get unexpected / garbage results. C will typically only warn if you initialize an array with more data than the defined size holds – C++, C#, Java, etc. will complain at compile time.
In languages like C, people sometimes confuse arrays and pointers. It’s true that an array starts at a spot in memory and occupies a contiguous space, but that’s not the same thing as a pointer to the first memory location. You can check by doing a sizeof in C to see that they are indeed different. The confusion typically happens because arrays are passed as pointers in C, and you access array members using pointer arithmetic. C arrays can contain anything except void or functions. They can live in any of three memory segments – data, heap, and stack:
- Dynamically allocated arrays go on the heap. (A dynamically allocated array is one created using malloc, which allocates a chunk of space on the heap based on how much space you ask for.)
- Static / global arrays are allocated in the data segment.
- Local arrays are put on the stack.
What about JavaScript?
Things are loosely enforced at best. You don’t define a length or a type. Technically, an array is nothing more than a special object behind the scenes that uses numerical keys for sorting purposes. Because it’s an object, it is copied by reference. Don’t believe us? Just put this into your console: typeof []
If you use the array without breaking the rules, there are built-in optimizations like storing the data in contiguous memory. What are examples of _breaking the rules_?
- Manually setting a property on an array, either with dot notation or using a non-numerical key.
- Leaving major gaps in the indices.
- Adding items to the array in reverse, from largest index to smallest.
Add items to the array in order, and life will be great.
- Push / pop are fast, but shift / unshift are costly.
- Use for..of. Technically you can use for..in because it’s an object, but it’s not a good idea: it’s 10–100x slower, AND you can get additional items that you didn’t expect.
- Length isn’t really length … it’s the largest numbered index + 1:
let myArr = [];
myArr[200] = true; // myArr.length === 201
- Length is writeable – what?! Yeah, you can increase it and nothing majorly bad happens … but if you decrease it, it’ll truncate data in the array above the specified length.
More about arrays
In a strongly typed language, the items must all be of the same type. In loosely typed languages, you can typically store whatever you’d like in the collection. In most typed languages, an array is a fixed-size sequential set of data, typically stored in contiguous memory. In scripting languages like JavaScript, the memory allocation may not be contiguous – it’s an implementation detail of the engine itself.
When is an array not an array?
Arrays generally have fixed sizes because the allocated memory needs to be contiguous. The size can be set dynamically at run time, but you generally can’t grow an array without moving it.
- Ruby has dynamic arrays, which are allocated dynamically and contiguous … but will automatically reallocate more space as needed, copying all of the existing data to a new spot and allocating more space.
- Python doesn’t have built-in arrays (outside the array module); it has lists, which are not fixed in size.
- JavaScript has “arrays” which are objects, with properties like length saved on the object. “In V8 and Carakan (and presumably Chakra), all (non-host) objects (both those that are arrays and those that aren’t) with properties whose names are array indexes (as defined in ES5) are stored as either a dense array (a C array containing some value wrapper) or a sparse array (which is implemented as a binary search tree).”
When is a JS array not an array?
When you do weird stuff!
Sizes
- JavaScript maximum size: 2^32 − 1 elements (the length is an unsigned 32-bit integer, ~4.29 billion elements).
- C#: by default, the maximum size of an Array is 2 gigabytes (GB). In a 64-bit environment, you can avoid the size restriction by setting the enabled attribute of the gcAllowVeryLargeObjects configuration element to true in the run-time environment. However, the array will still be limited to a total of 4 billion elements, and to a maximum index of 0X7FEFFFFF in any given dimension (0X7FFFFFC7 for byte arrays and arrays of single-byte structures).
- Java maximum size: 2^31 − 1 elements (arrays are indexed by int).
When to use
In typed languages, you’ll probably want arrays when trying to squeeze out every bit of performance available. Otherwise, the built-in List types are typically more flexible and powerful.

Linked List
What is it? And how does it work?
An ordered set of objects, each containing a reference to the next item in the list – like a conga line. There are also doubly linked lists, in which each node also contains a reference to the previous node. What’s the deal with double? You have the additional overhead of setting previous and next values, but the advantage is that it’s easy to drop a node given just that node (i.e. = rather than iterating through the list to find the node’s prev).
Pros
- No need to specify a size up front – no over-allocating.
- No need for contiguous memory.
- Easy to insert/delete – think about removing an item from a sorted array.
Cons
- No random access: you need to iterate through the list to get to any particular object.
- A terrible choice for some algorithms, like binary search.
- Extra memory required – one pointer (or two) per object.
When to use
- When you have lots of inserts/deletes.
- When you don’t need random access.
- When you don’t know the size up front.
What about languages that don’t have pointers? Use a reference: var node2 = { value: 5, next: node1 }
C#? There’s the LinkedList class … but isn’t it trivial? Why would you ever need that?
It’s nice to have helpers like AddFirst, AddLast, RemoveLast, AddAfter, etc. LinkedList vs List: List is actually backed by an array. LinkedList can cheaply add/remove items in the middle of the list, while List can only cheaply add to the end.

Queue
What is it? And how does it work?
Very similar to a stack … the biggest difference is the order in which data is removed. Queues are FIFO – First In, First Out – the oldest items get removed first. FIFO Bottles! Think about your school lunch line – you got in line first, you got served first, and then you left the line.
Useful when
- Ordering is important (FIFO).
- Breadth-first searches.
- Resource sharing among multiple consumers – you basically go to your first consumer in line, then the 2nd, etc.
- Asynchronous data syncing – data not received at the same rate it is sent.
Derivatives
Priority queue: similar to the standard queue, except that queue items have a priority associated with them, and higher-priority items are dequeued first. If the priorities are the same, the queue falls back to standard FIFO operation. Useful for:
- Efficiently finding the shortest path in Dijkstra’s algorithm.
- Data compression (Huffman coding).
- AI.
Deque (double-ended queue): allows inserts and deletes at both ends, essentially making it a hybrid of a stack and a queue. Useful for:
- Handling clockwise and counter-clockwise rotations in O(1) time.
- Efficiently adding or removing items from both ends of the queue.
Circular queue: also known as a ring buffer; the last item in the queue links back to the first. To see if the queue is full: if the first item − 1 == the last item, the queue is full. Useful for:
- Memory management – lets you fully utilize memory locations that might otherwise be skipped.
- Traffic systems – constantly cycle through and switch on the next traffic light on a timer basis.
- CPU scheduling.

Stack
What is it? And how does it work?
A stack is a “last in, first out” (LIFO) data structure.
Think of anything you’ve ever stacked … a stack of plates, a stack of books, a stack of pancakes … you don’t have random access; instead you access the topmost item on the stack. Stacks have two principal operations, push and pop, and often a peek operation:
- push – put a new element at the top of the stack.
- pop – remove the topmost element from the stack.
- peek – see the topmost element on the stack without removing it.
Stacks can be implemented with either an array or a linked list at their core. Technically only a singly linked list is necessary … i.e. we only need to point to the next item in the stack (the one below), because we’re only going to traverse down the stack.
Pros
- A fast way to know where you’ve been.
- Reads/writes are O(1), since you are only touching the top of the stack.
Cons
- In some languages, stacks are used to store both input data and the return addresses of calls, making them susceptible to stack smashing attacks when input sizes aren’t verified.
- There is no random access to a stack … it’s last in, first out.
When to use
- Backtracking while traversing a graph or tree – remember depth-first search.
- Reversing things like a string (or any other array/list-like thing).
- The call stack.
- Writing your own compiler … stacks are often used to ensure opening/closing constructs like curly braces, parentheses, etc. are balanced.

Resources We Like
- The Imposter’s Handbook
- List of data structures (Wikipedia)
- JavaScript data types and data structures (MDN web docs)
- Arrays in C/C++ (GeeksforGeeks)
- Arrays in C Language | Set 2 (Properties) (GeeksforGeeks)
- Arrays in C (Swarthmore College)
- Arrays
- ECMAScript Array [[DefineOwnProperty]] (P, Desc, Throw) (ECMA International)
- Performance of Arrays vs. Lists (Stack Overflow)
- List vs Array performance example (GeeksforGeeks IDE)
- Are JavaScript Arrays actually implemented as arrays? (Stack Overflow)
- ECMAScript ArrayCreate (length [, proto]) (ECMA International)
- LinkedList<T> Class
- Queue Data Structure (GeeksforGeeks)
- Applications of Priority Queue (GeeksforGeeks)
- Why do we need Deque data structures in the real world? (Stack Overflow)
- What is the daily life example of a circular queue? (Quora)
- Stack (abstract data type) (Wikipedia)
- Data Structures and Algorithms: When To Use What? (theGarbageCollector)
- Abstract Data Types: Stack, Queue, Priority Queue (theGarbageCollector)
- Data segment (Wikipedia)

Tip of the Week
- Want to share a tip? Submit your tip to us.
- Use npm ci for faster, more reliable builds. (The NPM Blog) Thanks gaprogman!
- doFactory .NET Design Pattern Framework 4.5 (doFactory) Thanks JoeRecursionJoe!
- BenchmarkDotNet – Powerful .NET library for benchmarking. (GitHub)
- Use git subtree to merge repositories together or split portions out into new repositories, all while maintaining the history. (GitHub) Thanks GreenFieldCoder!
- All of the documentation for everything you’ll ever use in one place. Thanks Konrad!
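The stack and queue behavior from this episode can be sketched with plain JavaScript arrays. This is our own illustration; remember from the notes above that shift is O(n), so real queue implementations prefer a linked list or ring buffer:

```javascript
// Stack: LIFO using push/pop (both cheap, since they only touch the end).
const stack = [];
stack.push('a');
stack.push('b');
const popped = stack.pop(); // last in, first out

// Queue: FIFO using push/shift. shift is O(n) because every remaining
// element moves down one slot – fine for a sketch, costly at scale.
const queue = [];
queue.push('a');
queue.push('b');
const served = queue.shift(); // first in, first out
```

The asymmetry is the whole point: the stack returns the newest item ('b'), while the queue returns the oldest ('a').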

Podcast Details
First Episode: Sep 9th, 2013
Latest Episode: Dec 17th, 2018
No. of Episodes: 212
Avg. Episode Length: About 2 hours