Card Shuffling – oh, how the turn tables

The last time I wrote about this, I’d found a formula that you can solve algorithmically to find the minimum number of shuffles it takes to resurrect an even-numbered deck of cards. I’d also found two solution series – for decks of size 2^N and 2^N - 2. Since then, I’ve made some exciting progress!
Continue reading “Card Shuffling – oh, how the turn tables”

Some more card shuffling

I’ve written about this before (here).

When we left off, we had a formula to solve that would give the length of a single loop:

P \cdot 2^N \bmod (C+1) = P

P is the position of a card in the loop, N is the number of shuffles, and C is the number of cards. Essentially, you take the starting position and keep doubling it – if it goes above the number of cards in the deck, take the remainder after dividing it by (C+1). Once it returns to its starting position, the loop has ended.
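Here’s a minimal sketch of that doubling procedure in Python (the function name and the 52-card example are my own illustration, not from the original post):

```python
def loop_length(p, c):
    """Shuffles needed for the card at position p to return home in a
    deck of c cards: keep doubling modulo (c + 1) until we're back."""
    pos, shuffles = p, 0
    while True:
        pos = (pos * 2) % (c + 1)
        shuffles += 1
        if pos == p:
            return shuffles

# For a standard 52-card deck, the card at position 1 takes 52 shuffles:
print(loop_length(1, 52))  # 52
```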
Continue reading “Some more card shuffling”

An explanation of how Euler solved the Basel problem

NOTE: I did not come up with this explanation myself. I based it on the explanation in the book “Journey through Genius” by William Dunham, and changed a few parts – I liked how much clearer this book’s explanation was than the other, more formal proofs I have seen.

What is the Basel problem?

The Basel problem had stumped mathematicians for years before Euler came along. It asked a question about this infinite sum:

\frac{1}{1}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+...
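As a quick numerical sanity check (my own aside, not part of Euler’s argument), here’s a sketch that sums the first million terms and compares the result with π²/6, the value Euler famously found:

```python
import math

# Partial sum of the first million terms of 1/1^2 + 1/2^2 + 1/3^2 + ...
partial = sum(1 / k**2 for k in range(1, 1_000_001))

print(partial)         # ~1.6449330668, about 1e-6 short of the limit
print(math.pi**2 / 6)  # ~1.6449340668
```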

Continue reading “An explanation of how Euler solved the Basel problem”

Mainframes Joyce, and more on Neural Networks

After a few more days messing around with the neural network (this time I’ve trained it on 4 James Joyce books, around 4 MB of text) I decided to try making a whole book. I found out that Finnegans Wake is roughly 1.3 MB of text, so I made 5 books of 1.3 MB each, altering a few variables on the neural network each time to see what changed. I picked the one I liked the most (temperature=0.9, for reference), and then spent a few hours formatting it. Then I went to Lulu and had it printed, because why not (the texts are all available on GitHub).
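For anyone wondering what the temperature variable does: it rescales the network’s output distribution before sampling, so lower values make the text play it safe and higher values make it more adventurous. A generic sketch of the technique (my own illustration, not the actual code of the network I used):

```python
import numpy as np

def sample_char(logits, temperature=0.9):
    """Pick the next character index: divide the logits by the
    temperature, softmax them, then draw from the distribution."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```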

Continue reading “Mainframes Joyce, and more on Neural Networks”

Neural Networks and James Joyce

Neural Networks are one of my favourite parts of computing – they allow you to do an incredible range of things, from turning handwriting into text, to creating mildly incoherent literature. I’ve played around with creating one before, to count syllables for a haiku generator. That didn’t work so well, so I’ve decided to play around with one already written by someone else, and see what I can do with it.

The first thing I thought of was to make a haiku-generating neural network, but it’s hard to find databases with text files of thousands of haiku. Instead, I’ve decided to train it on James Joyce novels. Andrej’s GitHub (linked above) gives a really good overview of what you need to do to get his neural network up and running, so I’ll leave that bit out.
Continue reading “Neural Networks and James Joyce”

Taylor Series

In this post, I want to have a look at a really neat part of calculus, Taylor series. It’s essentially a process you can use to approximate a function, e.g. e^x, using a polynomial – the idea is, the higher the order of the polynomial, the more accurate the approximation gets.

Let’s dive in! (This gets a bit tricky)

How does it work? First, some terminology – the first order approximation is a linear approximation (so ax + b), second order is quadratic (ax^2 + bx + c), and so on. So let’s start with a first order approximation of, say, e^x.
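To see the “higher order means more accurate” idea numerically, here’s a sketch that builds the Taylor polynomial of e^x around 0 (i.e. 1 + x + x^2/2! + ... + x^n/n!) up to a given order and compares it with the true value:

```python
import math

def taylor_exp(x, order):
    """Taylor polynomial of e^x around 0: 1 + x + x^2/2! + ... + x^n/n!"""
    return sum(x**n / math.factorial(n) for n in range(order + 1))

# Higher orders close in on e^1 = 2.71828...
for order in (1, 2, 5, 10):
    print(order, taylor_exp(1.0, order))
# 1 2.0
# 2 2.5
# 5 2.7166666...
# 10 2.7182818011...
```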
Continue reading “Taylor Series”
