# How to Learn Vocabulary Efficiently?

Today’s post is very similar to one of my earlier posts, titled “How to Teach Algorithms?”. In that earlier post, I announced an Algorithms App for iPad; I recently ported the Algorithms App to Mac. Today’s post is about how to learn vocabulary efficiently.

This summer, I went to a local bookstore to check out the vocabulary section. There were several expensive books with a limited number of practice tests. I also noticed a box of paper flashcards (with only 300 words) for around $25! After doing some more research, I realized that the existing solutions for learning English vocabulary are too hard to use, too expensive, or old-fashioned. So I started building an app with adaptiveness and usability as the primary goals. The result is the Vocabulary App (for iPhone and iPad).

Here is a short description of the app. The Vocabulary App uses a sophisticated algorithm (based on spaced repetition and the Leitner system) to design adaptive multiple-choice vocabulary questions. It is built on a hypergraph of words constructed using lexical cohesion. Learning tasks are divided into small sets of multiple-choice tests designed to help you master basic words before moving on to advanced words. Words that you have the hardest time with are selected more frequently. For a fixed word, the correct and wrong answers are selected adaptively, giving rise to hundreds of combinations. After each wrong answer, you receive detailed feedback with the meaning and usage of the underlying word. The app works best when used every day; take a test whenever you have free time. Go ahead and download the Vocabulary App and let me know your feedback and opinions, or suggest new features.

At any given waking moment I spend my time either (1) math monkeying around or (2) code monkeying around. During the math monkeying phase, I work on open problems in math (currently related to directed minors). During the code monkeying phase, I work on developing apps (currently the Algorithms App and the Vocabulary App) or adding new features to my websites TrueShelf and Polytopix. I try to maintain a balance between (1) and (2), subject to the nearest available equipment (a laptop or pen and paper). My next post will be on one of my papers (on directed minors) that is nearing completion.
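The spaced-repetition scheduling mentioned above can be illustrated with a small sketch. This is not the app’s actual algorithm (which also uses the hypergraph of words); it is a minimal Leitner-style scheduler, where a wrong answer sends a word back to the hardest box and harder boxes are quizzed more often:

```python
import random

def pick_word(boxes):
    """Pick the next word to quiz, favoring lower (harder) boxes.

    `boxes` maps a box index (0 = hardest) to a list of words.
    Lower boxes get exponentially larger weight, so words answered
    incorrectly come up more often.
    """
    weighted = []
    for box, words in boxes.items():
        weight = 2 ** (max(boxes) - box)   # box 0 gets the largest weight
        weighted.extend((weight, w) for w in words)
    total = sum(w for w, _ in weighted)
    r = random.uniform(0, total)
    for weight, word in weighted:
        r -= weight
        if r <= 0:
            return word
    return weighted[-1][1]

def update(boxes, word, correct):
    """Promote a word one box on a correct answer; demote to box 0 otherwise."""
    for box, words in boxes.items():
        if word in words:
            words.remove(word)
            target = min(box + 1, max(boxes)) if correct else 0
            boxes[target].append(word)
            return
```

A real implementation would also persist the boxes between sessions and schedule reviews by time, but the core adaptive loop is just this weighted pick plus promotion/demotion.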
Stay tuned.

# Open Problems for 2014

Wish you all a very Happy New Year. Here is a list of my 10 favorite open problems for 2014. They belong to several research areas inside discrete mathematics and theoretical computer science. Some of them are baby steps towards resolving much bigger open problems. May this new year shed new light on these open problems.

• 2. Optimization: Improve the approximation factor for the undirected graphic TSP. The best known bound is 7/5, by Sebo and Vygen.
• 3. Algorithms: Prove that the tree-width of a planar graph can be computed in polynomial time, or that it is NP-complete.
• 4. Fixed-parameter tractability: Treewidth and pathwidth are known to be fixed-parameter tractable. Are directed treewidth, DAG-width, and Kelly-width (generalizations of treewidth) and directed pathwidth (a generalization of pathwidth) fixed-parameter tractable? This is a very important problem for understanding the algorithmic and structural differences between undirected and directed width parameters.
• 5. Space complexity: Is planar ST-connectivity in logspace? This is perhaps the most natural special case of the NL vs. L problem. Planar ST-connectivity is known to be in $UL \cap coUL$. Recently, Imai, Nakagawa, Pavan, Vinodchandran and Watanabe proved that it can be solved simultaneously in polynomial time and approximately $O(\sqrt{n})$ space.
• 6. Metric embedding: Is the minor-free embedding conjecture true for partial 3-trees (graphs of treewidth 3)? The minor-free conjecture states that “every minor-free graph can be embedded in $l_1$ with constant distortion.” The special case of planar graphs also seems very difficult. I think the special case of partial 3-trees is a very interesting baby step.
• 7. Structural graph theory: Characterize Pfaffians of tree-width at most 3 (i.e., partial 3-trees). It is a long-standing open problem to give a nice characterization of Pfaffians and to design a polynomial-time algorithm that decides whether an input graph is a Pfaffian. The special case of partial 3-trees is an interesting baby step.
• 8. Structural graph theory: Prove that every minimal brick has at least four vertices of degree three. Bricks and braces are defined to better understand Pfaffians. The characterization of Pfaffian braces is known (more generally, the characterization of bipartite Pfaffians is known). To understand Pfaffians, it is important to understand the structure of bricks. Norine and Thomas proved that every minimal brick has at least three vertices of degree three, and conjectured that every minimal brick has at least cn vertices of degree three.
• 9. Communication complexity: Improve the bounds for the log-rank conjecture. The best known bound is $O(\sqrt{rank})$.
• 10. Approximation algorithms: Improve the approximation factor for the uniform sparsest cut problem. The best known factor is $O(\sqrt{\log n})$.

Here are my conjectures for 2014 :)

• Weak Conjecture: at least one of the above 10 problems will be resolved in 2014.
• Conjecture: at least five of the above 10 problems will be resolved in 2014.
• Strong Conjecture: all of the above 10 problems will be resolved in 2014.

Have fun!!

# PolyTopix

In the last couple of years, I have developed some (research) interest in recommendation algorithms and speech synthesis. My interests in these areas are geared towards developing an automated personalized news radio. Almost all of us are interested in consuming news, and in this internet age there is no dearth of news sources; often we have too many. We tend to “read” news from several sources and news aggregators, spending several hours per week, when most of the time we are simply interested in the top and relevant headlines. PolyTopix is my way of simplifying the process of consuming top and relevant news. The initial prototype is here. The website “reads” several news tweets (collected from different sources), ordered by a machine learning algorithm.
Users can log in and specify their individual interests (and zip code) to narrow down the news. Try PolyTopix and let me know your feedback. Here are some upcoming features:

• Automatically collect weather news (and local news) based on your location.
• Read more details of the most important news.
• Classify news as exciting/sad/happy etc. (based on a machine learning algorithm) and read it with the corresponding emotional voice.

Essentially, PolyTopix is aimed at a completely automated and personalized news radio that can “read” news from across the world anytime, with one click.

# Book Review of “Boosting: Foundations and Algorithms”

Following is my review of Boosting: Foundations and Algorithms (by Robert E. Schapire and Yoav Freund), to appear in the SIGACT book review column soon.

Book: Boosting: Foundations and Algorithms (by Robert E. Schapire and Yoav Freund)
Reviewer: Shiva Kintali

Introduction

You have k friends, each one earning a small amount of money (say 100 dollars) every month by buying and selling stocks. One fine evening, at a dinner conversation, they tell you their individual “strategies” (after all, they are your friends). Is it possible to “combine” these individual strategies and make a million dollars in a year, assuming your initial capital is the same as your average friend’s?

You are managing a group of k “diverse” software engineers, each with only “above-average” intelligence. Is it possible to build a world-class product using their skills?

The above scenarios give rise to fundamental theoretical questions in machine learning and form the basis of Boosting. As you may know, the goal of machine learning is to build systems that can adapt to their environments and learn from their experience.
In the last five decades, machine learning has impacted almost every aspect of our lives: computer vision, speech processing, web search, information retrieval, biology, and so on. In fact, it is very hard to name an area that cannot benefit from the theoretical and practical insights of machine learning. The answer to the above questions is Boosting, an elegant method for driving down the error of a combined classifier built from a number of weak classifiers. In the last two decades, several variants of Boosting have been discovered. All these algorithms come with a set of theoretical guarantees and have made a deep practical impact on the advances of machine learning, often providing new explanations for existing prediction algorithms.

Boosting: Foundations and Algorithms, written by the inventors of Boosting, deals with variants of AdaBoost, an adaptive boosting method. Here is a quick explanation of the basic version of AdaBoost. AdaBoost makes iterative calls to the base learner. It maintains a distribution over training examples to choose the training sets provided to the base learner on each round. Each training example is assigned a weight, a measure of the importance of correctly classifying that example on the current round. Initially, all weights are set equally. On each round, the weights of incorrectly classified examples are increased, so that “hard” examples get successively higher weight. This forces the base learner to focus its attention on the hard examples and drives down the generalization error. AdaBoost is fast and easy to implement, and the only parameter to tune is the number of rounds. The actual performance of boosting depends on the data.

Summary

Chapter 1 provides a quick introduction and overview of Boosting algorithms, with practical examples. The rest of the book is divided into four major parts. Each part is divided into 3 to 4 chapters.
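The round-by-round reweighting described above fits in a few lines of code. The following is a minimal sketch of basic AdaBoost, not the book’s pseudocode: for simplicity, the weak learner here just picks the lowest-error classifier from a fixed pool of candidate “stumps”.

```python
import math

def adaboost(examples, labels, stumps, rounds):
    """Minimal AdaBoost sketch.

    `stumps` is a list of weak classifiers, each a function mapping
    an example to +1 or -1. Returns the combined (boosted) classifier.
    """
    n = len(examples)
    d = [1.0 / n] * n                       # uniform initial distribution
    ensemble = []                           # list of (alpha, stump) pairs
    for _ in range(rounds):
        # weak learner: pick the stump with the lowest weighted error
        best, best_err = None, float("inf")
        for h in stumps:
            err = sum(w for w, x, y in zip(d, examples, labels) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        best_err = max(best_err, 1e-10)     # avoid division by zero
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best))
        # reweight: misclassified ("hard") examples gain weight
        d = [w * math.exp(-alpha * y * best(x))
             for w, x, y in zip(d, examples, labels)]
        z = sum(d)
        d = [w / z for w in d]

    def classify(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return classify
```

The only free parameter is `rounds`, matching the claim above that the number of rounds is the only knob to tune.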
Part I studies the properties and effectiveness of AdaBoost, and theoretical aspects of minimizing its training and generalization errors. It is proved that AdaBoost drives the training error down very fast (as a function of the error rates of the weak classifiers) and the generalization error arbitrarily close to zero. Basic theoretical bounds on the generalization error suggest that AdaBoost should overfit; however, empirical studies show that it does not. To explain this paradox, a margin-based analysis is presented that accounts for the absence of overfitting.

Part II explains several properties of AdaBoost using game-theoretic interpretations. It is shown that the principles of Boosting are intimately related to the classic min-max theorem of von Neumann. A two-player game (between the boosting algorithm and the weak learning algorithm) is considered, and it is shown that AdaBoost is a special case of a more general algorithm for playing a repeated game. By reversing the roles of the players, a solution is obtained for the online prediction model, thus establishing a connection between Boosting and online learning. Loss minimization is studied, and AdaBoost is interpreted in an abstract geometric framework as optimizing a particular objective function. More interestingly, AdaBoost is viewed as a special case of more general methods for optimizing an objective function, such as coordinate descent and functional gradient descent.

Part III explains several methods of extending AdaBoost to handle classifiers with more than two output classes. AdaBoost.M1, AdaBoost.MH and AdaBoost.MO are presented, along with their theoretical analysis and practical applications. RankBoost, an extension of AdaBoost to ranking problems, is also studied. Such an algorithm is very useful, for example, for ranking webpages based on their relevance to a given query.

Part IV is dedicated to advanced theoretical topics.
Under certain assumptions, it is proved that AdaBoost can handle noisy data and converge to the best possible classifier. An optimal boost-by-majority algorithm is presented. This algorithm is then made adaptive, leading to an algorithm called BrownBoost.

Many examples are given throughout the book to illustrate the empirical performance of the algorithms presented. Every chapter ends with a summary and a bibliography of the related publications. There are well-designed exercises at the end of every chapter. An appendix briefly outlines some required mathematical background.

Opinion

The book is definitely a very good reference text for researchers in the area of machine learning. If you are new to machine learning, I encourage you to read an introductory machine learning book (for example, Machine Learning by Tom M. Mitchell) to better understand and appreciate the concepts. In terms of course use, a graduate-level machine learning course can be designed around the topics covered in this book, and the exercises can be readily used for such a course.

Overall, this book is a stimulating learning experience. It has provided me with new perspectives on the theory and practice of several variants of Boosting algorithms. Most of the algorithms in this book were new to me, and I had no difficulty following the algorithms and the corresponding theorems. The exercises at the end of every chapter made these topics much more fun to learn. The authors did a very good job compiling different variants of Boosting algorithms, and they achieved a nice balance between theoretical analysis and practical examples. I highly recommend this book for anyone interested in machine learning.

# TrueShelf 1.0

One year back (on 6/6/12) I announced a beta version of TrueShelf, a social network for sharing exercises and puzzles, especially in mathematics and computer science.
After a year of testing and adding new features, I can now say that TrueShelf is out of beta. TrueShelf turned out to be a very useful website. When students ask me for practice problems (or books) on a particular topic, I simply point them to TrueShelf and tell them the tags related to that topic. When I am advising students on research projects, I first tell them to solve all related problems (in the first couple of weeks) to prepare them to read research papers.

Here are the features in TrueShelf 1.0:

• Post an exercise, multiple-choice question, video, or notes.
• Solve any multiple-choice question directly on the website.
• Add a topic and tags to any post.
• Add a source or level (high-school/undergraduate/graduate/research).
• Show textbooks related to a post.
• Show related posts for every post.
• View a printable or LaTeX version of any post.
• Email, tweet, or share any post on Facebook or Google+ directly from the post.
• Add any post to your favorites.
• Like (a.k.a. upvote) any post.

Feel free to explore TrueShelf, contribute new exercises, and let me know if you have any feedback or new features you want to see. You can also follow TrueShelf on Facebook, Twitter, and Google+. Here is a screenshot highlighting the important features.

# Recreational Math Books – Part I

Most of us encounter math puzzles during high school. If you are really obsessed with puzzles, actively searching for and solving them, you will very soon run out of puzzles! One day you will simply realize that you are not encountering any new puzzles. No more new puzzles. Poof. They are all gone. You feel like screaming “Give me a new puzzle.” This happened to me around the end of my undergrad days. During this phase of searching for puzzles, I encountered the Graceful Tree Conjecture and realized that there are lots of long-standing open “puzzles”. I don’t scream anymore. Well… sometimes I do scream when my proofs collapse. But that’s a different kind of screaming.
Sometimes I do try to create new puzzles. Most of the puzzles I create are either very trivial to solve or very hard and related to long-standing conjectures. It often takes a lot of effort and ingenuity to create a puzzle with the right level of difficulty. In today’s post, I want to point you to some of the basic puzzle books that everybody should read. So, the next time you see a kid screaming “Give me a new puzzle,” simply point him/her to these books. Hopefully they will stop screaming for some time. If they come back to you soon, point them to the Graceful Tree Conjecture :)

I will mention more recreational math books in part 2 of this blog post.

# How to Teach Algorithms?

Algorithms are everywhere. They help us travel efficiently, retrieve information from huge data sets, secure money transactions, recommend movies, books and videos, predict the stock market, etc. It is very tough to think of a daily task that does not benefit from efficient algorithms. Often the algorithms behind most of these tasks are very simple, yet their impact is tremendous. When I say “simple”, I mean simple to people who know them. Many people consider algorithms too mathematical and assume that understanding them is beyond their capability. What they do not realize is that algorithms are often simple extensions of our daily rational thinking process.

For example, almost everybody considers it stupid to buy an item for $24 and pay $6 shipping when there is free shipping for orders of more than $25: if you add one more item costing $1, you save $5. Also, when we pack our bags for travel, most of us try to do it as “efficiently” as possible, carrying as many “valuable” things as possible while trying to avoid paying for extra luggage on airlines. We consider these rational choices. Algorithms are simply “step-by-step procedures to achieve these rational objectives”.
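The bag-packing example above is essentially the classic 0/1 knapsack problem, and its textbook dynamic-programming solution is short. This is a generic sketch, not code from the Algorithms App:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming: pack the most total value
    without exceeding the weight capacity.

    dp[w] holds the best value achievable with total weight <= w.
    """
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

For instance, with three items of values 60, 100, 120 and weights 10, 20, 30, a bag of capacity 50 fits the last two items for a best value of 220.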

If you are an instructor (like me) teaching algorithms, you might have noticed that most students (around 70%) are intimidated when they take a basic algorithms course. Most of them DO end up doing well in the course, but they consider the process painful. If they reflect on the course, most often they say “that course was a nightmare, I don’t remember what I learnt in that course”. They do not seem to have enjoyed the course, and they probably remember only 30% of the material. This is definitely not acceptable for such a fundamental course.

Often, when I come back to my office after teaching, I say to myself “I should have given them one more example, to help them get better intuition”. You can always do a better job if you are given more time. Alas, we have time-bounded classes and almost infinite details to cover. We expect students to learn some concepts on their own and develop their own intuitions, so we want to give them “good” reading material; their understanding then depends on how well these readings are written.

Today’s post is about “How to Teach Algorithms?” Here is one of my experiences from teaching an undergrad algorithms course at GeorgiaTech. I was teaching dynamic programming, and I gave several examples to make sure the students understood the paradigm. At the end of the class, almost 50% of the class had questions, because this was the first time they had seen dynamic programming. I told them to see me in my office hours. I quickly implemented a Java applet to show how the matrix entries are filled by the algorithm, step by step. When I showed them this applet and the pseudo-code side by side (highlighting each line of code as it executed), almost all of the students understood the main idea behind dynamic programming. Some of them even said “it is easy”. I was glad, and I wanted to add more algorithms to my code.
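The applet’s core idea, replaying the table-filling order cell by cell, can be sketched in a few lines. Here edit distance stands in as the example (the original applet may have used a different dynamic program); the (row, column, value) events are what a front-end would animate:

```python
def edit_distance_steps(a, b):
    """Yield (row, col, value) in the order the edit-distance table
    is filled, mimicking a cell-by-cell animation."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    # base cases: transforming a prefix to/from the empty string
    for i in range(m + 1):
        dp[i][0] = i
        yield (i, 0, i)
    for j in range(1, n + 1):
        dp[0][j] = j
        yield (0, j, j)
    # fill the table row by row
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + cost)  # substitute
            yield (i, j, dp[i][j])
```

A visualizer can consume these events one at a time, highlighting each cell as it is filled; the last event carries the final answer in the bottom-right cell.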

The Kintali Language

The goal is to have a very simple-to-understand “executable pseudo-code” along with an animation framework that “understands” this language. So I started designing a new language and called it the Kintali language, for lack of a better name :) . I borrowed syntax from several pseudo-code conventions. It took me almost two years to implement all the necessary features, keeping in mind a broad range of algorithms. I developed an interpreter to translate this language into an intermediate representation with callbacks to an animation library. This summer, I finally implemented the animation library and the front-end in Objective-C. The result is the Algorithms App for iPad, released on Sep 20, 2012. This is my attempt to teach as many algorithms as possible through intuitive visualization and friendly exercises.

Features

The current version has sorting algorithms (Insertion Sort, Bubble Sort, Quick Sort and Merge Sort). The main advantage of my framework will be demonstrated once I add graph algorithms. I will add some “adaptive” exercises and games too. For example, one of the games is to predict the next matrix entry that will be filled by an algorithm.

Also, I have the necessary framework to visually demonstrate recursion (by showing the recursion tree), dynamic programming (by showing the status of each matrix entry, filled or waiting to be filled), divide and conquer (by splitting the data), etc. Since the framework is ready, adding new algorithms will not take much time.
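As a textual stand-in for the recursion-tree idea (a sketch, not the app’s actual animation code), one can print the call tree of naive Fibonacci with indentation showing the recursion depth:

```python
def fib_tree(n, depth=0, lines=None):
    """Return the recursion tree of naive Fibonacci as indented lines.

    Each recursive call appends one line, indented by its depth,
    so the returned list reads as a top-down call tree.
    """
    if lines is None:
        lines = []
    lines.append("  " * depth + f"fib({n})")
    if n >= 2:
        fib_tree(n - 1, depth + 1, lines)
        fib_tree(n - 2, depth + 1, lines)
    return lines
```

Printing `fib_tree(4)` line by line makes the repeated subproblems visible at a glance, which is exactly the intuition that motivates memoization and dynamic programming.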

Here is a screenshot of Quick Sort in action.

Platforms

After I developed the interpreter, I was wondering which platforms to support first. I went with iPad because I developed the interpreter in C, and Objective-C is a superset of C. The Mac desktop version should be available in a couple of weeks. In the long run I will implement Android, Linux and Windows 8 versions too.

Goal

The big goal here is to “almost” replace an algorithms textbook. I added a button to access relevant Wikipedia articles (within the app) describing the corresponding algorithms. With simple pseudo-code, intuitive animations, adaptive exercises and easy access to online articles, I think this goal is definitely achievable.

Questions

I have some quick questions for all the instructors and students of algorithms.

• What algorithms do you want to see soon, i.e., what algorithms did you have the most difficulty learning/teaching?
• What are some current methods you use to overcome the limitations of “static” textbooks?
• Any more ideas to make algorithms more fun and cool to learn/teach?

I wanted to write this post after reaching at least 100 downloads, which I assumed would take a month. To my surprise, there were 100 downloads from 15 countries in the first 40 hours. I guess I have to add new features faster than I planned.

TrueShelf and the Algorithms App are new additions to my hobbies, the others being painting, board games and biking. Man’s got to have some hobbies. :)