Well, from a programming perspective there are a number of ways that AIs can "learn," but most of them are only good within one or two respective areas. Sometimes you have to fine-tune your learning AI to whatever task you're trying to get it to learn. I don't know about amiibos in particular, but a lot of AIs have a Stimulus -> Response type system where they "detect" something and then respond to whatever they've "detected." For example, if it detects an enemy within grab range, it might throw out a grab. It might also track outcomes, something like "when I throw out a grab in this situation, 70% of the time I get hit, so maybe I'll try dtilt instead." However, without the things in our hands right now, we don't really know.
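To make that more concrete, here's a rough sketch of what a stimulus -> response system with trial and error might look like. To be clear, none of this is from the actual game; the move names, success rates, and structure are all invented for illustration:

```python
import random

# Hypothetical sketch only: track per-move success rates for one stimulus
# (e.g. "enemy in grab range") and mostly pick the best-performing move,
# occasionally trying something else (trial and error).

class ResponseLearner:
    def __init__(self, actions):
        # Attempts and successes per candidate response
        self.stats = {a: {"tries": 0, "wins": 0} for a in actions}

    def _rate(self, action):
        s = self.stats[action]
        # Optimistic default so untried moves still get attempted
        return s["wins"] / s["tries"] if s["tries"] else 0.5

    def choose(self, explore=0.1):
        # Small chance to experiment; otherwise exploit the best move so far
        if random.random() < explore:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, action, succeeded):
        self.stats[action]["tries"] += 1
        if succeeded:
            self.stats[action]["wins"] += 1

# Simulated "enemy in grab range" stimulus: grab only works 30% of the
# time, dtilt works 60% of the time (made-up numbers)
learner = ResponseLearner(["grab", "dtilt"])
for _ in range(200):
    a = learner.choose()
    learner.record(a, random.random() < (0.3 if a == "grab" else 0.6))

# After enough trials, the learner typically settles on dtilt
print(learner.choose(explore=0.0))
```

This is basically a tiny multi-armed-bandit setup; a real fighting-game AI would key these statistics to many different game states, but the idea of shifting toward responses that work is the same.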
There are some review amiibos out right now, but I don't think many of the people who own those are computer scientists or hackers, so they probably don't know. I suspect that the amiibo's learning algorithms are stored in the Wii U version of Smash Bros., and that the amiibo themselves only store variable data, so it might be possible to figure out how they learn just by looking at that game's code and doing some datamining.
One thing the amiibo seems to do is copy what others do, so there's some kind of duplication happening there. I don't know how much this applies to advanced techniques (ATs) or combos, but I intend to experiment with it a bit.
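If the copying really works by mimicry, one simple way to implement it would be to tally what the trainer does in each situation and replay those choices in proportion. Again, this is just a guess at a mechanism; the situation and move names here are made up:

```python
import random
from collections import Counter

# Hypothetical "learning by duplication" sketch: count the trainer's
# observed (situation, move) pairs and sample from them when the amiibo
# finds itself in the same situation.

observed = Counter()

def observe(situation, move):
    # Called whenever the trainer performs a move in a recognizable situation
    observed[(situation, move)] += 1

def imitate(situation):
    # Pick among moves seen in this situation, weighted by frequency
    moves = {m: n for (s, m), n in observed.items() if s == situation}
    if not moves:
        return None  # nothing copied yet for this situation
    names, weights = zip(*moves.items())
    return random.choices(names, weights=weights)[0]

# Pretend the trainer short-hops into aerials at the ledge far more
# often than they roll (invented data)
for _ in range(8):
    observe("opponent_at_ledge", "short_hop_aerial")
observe("opponent_at_ledge", "roll")

print(imitate("opponent_at_ledge"))
```

A scheme like this would also explain why human-trained amiibos end up looking so different from CPU-trained ones: the distribution being copied is just very different.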
We also know that amiibos with significant training exceed the abilities of their Level 9 CPU counterparts by a wide margin. Furthermore, it has been shown that an amiibo that trains against a skilled human opponent will do better than an amiibo that trains against computer players, and it's not even close. The human-trained amiibo will defeat the CPU-trained amiibo even if the CPU-trained amiibo has been fed equipment.
My best guess is that amiibos learn in two different ways: by duplication and by trial and error. That said, that's just a shot in the dark, and we won't really know until we've had the game for a bit.