Object and scene detection with #AI

Continuing the previous #ArtificialIntelligence theme. I wanted to see how Amazon's Rekognition works and how it differs from the #AI offerings of others, such as Microsoft.

Here are the confidence scores for a #ProjectMurphy image. I am glad to see there is a 99% confidence that this is a person.

Object and Scene detection

The POST request is quite simple:

{
  "method": "POST",
  "path": "/",
  "region": "us-west-2",
  "headers": {
    "Content-Type": "application/x-amz-json-1.1",
    "X-Amz-Date": "Thu, 01 Dec 2016 22:21:01 GMT",
    "X-Amz-Target": "com.amazonaws.rekognitionservice.RekognitionService.DetectLabels"
  },
  "contentString": {
    "Attributes": [
      "ALL"
    ],
    "Image": {
      "Bytes": "..."
    }
  }
}
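If you would rather not hand-craft the signed request, the same DetectLabels call can be made via the boto3 SDK. Here is a minimal sketch; the file name and the MaxLabels/MinConfidence values are my own choices, not from the capture above.

import boto3

# Rekognition client in the same region as the captured request
client = boto3.client("rekognition", region_name="us-west-2")

# "photo.jpg" is a placeholder; DetectLabels takes the raw image bytes
with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

response = client.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=15,      # cap the number of labels returned
    MinConfidence=50,  # drop anything the service is less sure about
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')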

And so is the response:

{
  "Labels": [
    {
      "Confidence": 99.2780990600586,
      "Name": "People"
    },
    {
      "Confidence": 99.2780990600586,
      "Name": "Person"
    },
    {
      "Confidence": 99.27307891845703,
      "Name": "Human"
    },
    {
      "Confidence": 73.7669448852539,
      "Name": "Flyer"
    },
    {
      "Confidence": 73.7669448852539,
      "Name": "Poster"
    },
    {
      "Confidence": 68.23612213134765,
      "Name": "Art"
    },
    {
      "Confidence": 58.291263580322266,
      "Name": "Brochure"
    },
    {
      "Confidence": 55.91957092285156,
      "Name": "Modern Art"
    },
    {
      "Confidence": 53.9996223449707,
      "Name": "Blossom"
    },
    {
      "Confidence": 53.9996223449707,
      "Name": "Flora"
    },
    {
      "Confidence": 53.9996223449707,
      "Name": "Flower"
    },
    {
      "Confidence": 53.9996223449707,
      "Name": "Petal"
    },
    {
      "Confidence": 53.9996223449707,
      "Name": "Plant"
    },
    {
      "Confidence": 50.69965744018555,
      "Name": "Face"
    },
    {
      "Confidence": 50.69965744018555,
      "Name": "Selfie"
    }
  ]
}
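Notice how the labels fall into bands: People/Person/Human at ~99%, then a long tail of guesses (Blossom, Flora, Flower at ~54%). That tail is why a confidence cutoff matters in practice. Continuing from the detect_labels sketch above, with the 90% cutoff being my own arbitrary choice:

# Split labels into confident vs. tentative; 90.0 is an arbitrary cutoff
CUTOFF = 90.0
labels = response["Labels"]
strong = [l["Name"] for l in labels if l["Confidence"] >= CUTOFF]
weak = [l["Name"] for l in labels if l["Confidence"] < CUTOFF]
print("Confident:", ", ".join(strong))  # People, Person, Human
print("Tentative:", ", ".join(weak))    # Flyer, Poster, Art, ...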

Here is what the facial analysis shows:

Facial Analysis
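The facial analysis comes from the DetectFaces operation. A minimal sketch of that call, assuming the same boto3 client and image bytes as above:

# Attributes=["ALL"] returns the full set of facial attributes
# (emotions, pose, landmarks, etc.) instead of the default subset
faces = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in faces["FaceDetails"]:
    print("Bounding box:", face["BoundingBox"])
    for emotion in face["Emotions"]:
        print(f'  {emotion["Type"]}: {emotion["Confidence"]:.1f}%')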

But how does it handle something a little more complex?

Object and Scene detection

And finally, what of the comparison? I think there might be some more work to be done on that front.

Face Comparison capture

Here is the response:

{
  "FaceMatches": [
    {
      "Face": {
        "BoundingBox": {
          "Height": 0.3878205120563507,
          "Left": 0.2371794879436493,
          "Top": 0.22435897588729858,
          "Width": 0.3878205120563507
        },
        "Confidence": 99.79533386230469
      },
      "Similarity": 0
    }
  ],
  "SourceImageFace": {
    "BoundingBox": {
      "Height": 0.209781214594841,
      "Left": 0.4188888967037201,
      "Top": 0.13127413392066955,
      "Width": 0.18111111223697662
    },
    "Confidence": 99.99442291259765
  }
}
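That "Similarity": 0 is what makes me think more work is needed: the service is 99.8% sure it found a face in the target image, yet scores the match itself at zero. The comparison comes from the CompareFaces operation; a minimal sketch, with the file names being placeholders of my own:

# CompareFaces takes a source face and looks for it in the target image
with open("source.jpg", "rb") as f:
    source_bytes = f.read()
with open("target.jpg", "rb") as f:
    target_bytes = f.read()

result = client.compare_faces(
    SourceImage={"Bytes": source_bytes},
    TargetImage={"Bytes": target_bytes},
    SimilarityThreshold=0,  # return every candidate match, even weak ones
)

for match in result["FaceMatches"]:
    print(f'Similarity: {match["Similarity"]:.1f}%, '
          f'face confidence: {match["Face"]["Confidence"]:.1f}%')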

Playing with #AI

So, I have been spending a lot of time recently on many things related to Artificial Intelligence (#AI).  More on that some day. 🙂

I was curious about yesterday's announcement from Amazon that it is jumping on this bandwagon. Of course, Microsoft and others have been there for a while. I don't know to what extent Amazon has been working on this, but given that Alexa has been out for a couple of years, I know they have had rich pickings for tuning this further.

I thought Polly (like the parrot?) was quite different from the things I have seen from others. It is a text-to-speech service that renders the input text in various voices and dialects, and you can choose from a few output formats too. It supports a few dialects (for the synthesized speech) and can be used via a simple API (the Android example shows it is not very complex to consume; of course, you still need to think about the overall design and the usual software-engineering elements: latency, limits, bandwidth, etc.). Should you desire, you can customize the output using pronunciation lexicons that let you tweak how words are spoken.
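Here is a minimal sketch of that API via boto3. The voice IDs are real Polly voices, but which voice produced which clip below is my guess, and the output file and lexicon names are my own placeholders.

import boto3

polly = boto3.client("polly")

# SynthesizeSpeech returns an audio stream; VoiceId picks the voice/dialect.
# "Russell" is an Australian English male voice; others include "Raveena"
# (Indian English, female), "Giorgio" (Italian, male), and "Joey" (US, male).
response = polly.synthesize_speech(
    Text="Hello from Polly!",
    OutputFormat="mp3",
    VoiceId="Russell",
    # LexiconNames=["my-lexicon"],  # optional: a pre-uploaded pronunciation lexicon
)

with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())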

Here are a few examples; of course, none of them are me, hence the "cold".

Australian (Male):

Indian (Female):

Italian (Male):

US/American (Male):

Of course, if you play with it, it is easy to pick up the patterns of what is being changed versus not. But kudos to the team on this. I think it will help accelerate the adoption of #AI.