GAMES ALGORITHMS PLAY The Monte Carlo Tree Search Algorithm

By Maya Indira Ganesh

In preparing for this workshop, I didn’t start out following the Field Guide structure. However, I eventually reverted to it as a ‘container’ for my notes. I had set aside the question of whether the Field Guide was an appropriate organising principle and presentation format. I was actually curious about it as a method of writing about algorithms: would it work or not, and why? The structure of the Field Guide was appealing because it rejected established conventions of how algorithms are (expected to be) discussed. This wasn’t the language of mathematics or computer science, but offered metaphor and wit as tools with which to examine the subject.

Background
British sci-fi writer Arthur C. Clarke was approached by a publisher to contribute to a new publishing scheme: short sci-fi stories that would fit onto the back of a postcard, which readers would want to post to each other. The scheme never took off, but Clarke contributed a 180-word story anyway:[1]

Earth’s flaming debris still filled half the sky when the question filtered up to Central from the Curiosity Generator. “Why was it necessary? Even though they were organic, they had reached Third Order Intelligence.”
“We had no choice: five earlier units became hopelessly infected, when they made contact.”
“Infected? How?”
The microseconds dragged slowly by, while Central tracked down the few fading memories that had leaked past the Censor Gate, when the heavily-buffered Reconnaissance Circuits had been ordered to self-destruct.
“They encountered a – problem – that could not be fully analyzed within the lifetime of the Universe. Though it involved only six operators, they became totally obsessed by it.”
“How is that possible?”
“We do not know: we must never know. But if those six operators are ever re-discovered, all rational computing will end.”
“How can they be recognized?”
“That also we do not know; only the names leaked through before the Censor Gate closed. Of course, they mean nothing.”
“Nevertheless, I must have them.”
The Censor voltage started to rise; but it did not trigger the Gate.
“Here they are: King, Queen, Bishop, Knight, Rook, Pawn.”

Building a program that could control these six operators was, we now know, a significant ambition amongst computer scientists. The building of Deep Blue was a key moment in the discursive shaping of what AI is thought to be. The idea that a computer that could beat a human at Chess was, in fact, intelligent possibly says more about what computer scientists thought intelligence was than about whether the computer was actually intelligent. The building of game software to challenge humans continues as a tradition in computing and AI because games are a relatively low-risk context in which to develop smart, fast, and flexible algorithms. The development of such software takes on new significance now for its connection to advances in machine learning.

In March 2016, DeepMind, a London-based start-up recently acquired by Google, presented their software AlphaGo at a match against Lee Sedol, the world’s fourth-best Go player. Go is an ancient Chinese board game and a perfect information game (one where all the information, i.e. moves, options, points, etc., is visible and accessible to both players), played on a 19×19 board with black and white stones that must be ‘captured’, much like in Chess. Like Chess, Go is as much about the style and elegance of game-playing as about winning stones. Go is a notoriously complex game because of the vast number of possible moves: despite its simple rules there are, famously, more possible positions than there are atoms in the universe. There are roughly 10^170 possible positions in Go, and for this reason the game presents a considerable challenge to software developers.
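To get a feel for these numbers, a rough back-of-the-envelope comparison of game-tree sizes can help. The sketch below uses commonly cited approximations (a branching factor of about 35 over roughly 80 plies for Chess, about 250 moves over roughly 150 turns for Go); the figures are purely illustrative, not exact:

```python
# Rough, illustrative comparison of game-tree sizes using commonly cited
# approximate branching factors and game lengths (not exact figures).
from math import log10

def tree_size_exponent(branching_factor, depth):
    """Return the exponent x such that branching_factor ** depth is about 10 ** x."""
    return depth * log10(branching_factor)

print(f"Chess: ~10^{tree_size_exponent(35, 80):.0f} lines of play")
print(f"Go:    ~10^{tree_size_exponent(250, 150):.0f} lines of play")
```

Even generous pruning cannot make a tree of that size searchable exhaustively, which is why Go resisted the approach that worked for Chess.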

The challenge for software development is being able to compute all the possible moves and learn how each move works, how successful it is, and when it can be used. As any Chess, Checkers or Go player knows, playing a move is contingent on how it has been evaluated in terms of its outcomes; the game is all about being able to represent what the other player will do and stay a few moves ahead of the current one. In Chess, with Deep Blue, positions and their outcomes were coded in, and the software searched for the right move based on the positions on the board. For this reason Deep Blue is often described as using ‘brute force’ search, combined with selective search and evaluation techniques that assessed the strength of a move. It is not possible to code in all possible moves and options, and evaluations of them, at the scale of Go. So there had to be some other way to address the problem of building software that could beat a human.

AlphaGo eventually beat Lee Sedol 4-1. The AlphaGo software runs on the Monte Carlo Tree Search algorithm with value networks and policy networks, new features that give it the power to make decisions, evaluate them, and remember how they work. AlphaGo was developed in the following way, as told on the Google DeepMind blog:[2]

We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning. Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.

This field guide entry therefore attempts to investigate the Monte Carlo Tree Search algorithm.

The Field Guide to Monte Carlo Tree Search Algorithm

Internal Physiology:
The Monte Carlo Tree Search algorithm (MCTS) is composed of branches, nodes, and leaves (also sometimes referred to as ‘child’ nodes).

External Physiology:
The ‘Monte Carlo’ part of MCTS refers to the Monte Carlo method of random simulation, named after the casino: establishing the different possible moves and their outcomes in games of probability by sampling them. MCTS works to come up with strategies to achieve a desired goal. MCTS does not use ‘if-then’ rules but works by ‘looking ahead’: it is a tool to anticipate or simulate outcomes, make decisions, and evaluate them.

Borrowing the rest of its name from the morphology and physiology of trees, MCTS addresses the complexity, or depth, of a problem by building a tree of actions and outcomes. It does so by randomly simulating actions and then evaluating them. The tree structure helps discover strategies and paths in a complex and unknown landscape.

MCTS functions in four phases: selection, expansion, simulation, and backpropagation, which are well documented here:

http://www.cameronius.com/research/mcts/about/index.html
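For readers who prefer code to diagrams, here is a minimal, generic sketch of those four phases in Python. It assumes a hypothetical GameState object with legal_moves(), play(), is_terminal() and result() methods, and is an illustration rather than AlphaGo’s implementation:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())   # moves not yet expanded
        self.visits, self.wins = 0, 0.0

def best_child(node, c=1.41):
    # Tree policy: pick the child with the highest upper confidence bound (UCB).
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes using the tree policy.
        while not node.untried and node.children:
            node = best_child(node)
        # 2. Expansion: add one new child for a move that has not been tried yet.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves to the end of the game (a 'rollout').
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        reward = state.result()   # e.g. 1.0 for a win, 0.0 for a loss
        # 4. Backpropagation: update statistics along the path back to the root.
        # (A full implementation would flip the reward for the alternating players.)
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent
    # Finally, play the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move
```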

Nodes are selected using ‘upper confidence bound’ techniques, which draw on the ‘multi-armed bandit’ problem.
The ‘tree policy’ establishes the way in which the algorithm proceeds through the tree. It balances exploration (investigating new areas) with exploitation (delving further into the options that look the most promising)[1].
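A small worked example may make that balance concrete. The numbers below are invented; UCB1 is the standard bandit-style formula used as a tree policy in many MCTS implementations:

```python
import math

# Two hypothetical children of a node that has been visited N = 100 times:
#   A: 60 wins from 90 visits (well explored, strong average)
#   B:  3 wins from 10 visits (barely explored, weak average so far)
N, c = 100, 1.41

def ucb1(wins, visits):
    # exploitation term (average reward) + exploration term (bonus for rarely visited moves)
    return wins / visits + c * math.sqrt(math.log(N) / visits)

print("A:", round(ucb1(60, 90), 3))
print("B:", round(ucb1(3, 10), 3))
# B's large exploration bonus outweighs A's better win rate here, so the tree
# policy still spends some simulations on the under-explored move.
```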

MCTS builds maps of unknown territories. Unlike its predecessors, MCTS is freed from needing prior contextual information about a system. For earlier predecessors like Minimax and Alpha-Beta, which were used in the development of the Grandmaster-defeating software Deep Blue, a set of rules had to be specifically defined; MCTS has “contextual freedom”. It is also asymmetric in its growth: it will revisit more ‘interesting’ nodes if they generate successful outcomes and results, so “move quality” gets better with time. Not all options are equally followed, particularly in cases like Go and with reinforcement learning. This is the big difference from Chess and brute-force approaches, where all possible options are examined. That is simply not an option at the scale of Go, so there needs to be a mechanism by which the algorithm can privilege one, more successful, option over another; in AlphaGo, ‘value networks’ and ‘policy networks’ are the aspects of the reinforcement learning program that allow it to do so. This also results in ‘asymmetric’ tree growth, as seen here if you scroll down:
http://www.cameronius.com/research/mcts/about/index.html

In the context of more powerful machine learning techniques, MCTS is enhanced by reinforcement learning: each successful step is evaluated for how it is associated with the final outcome. In Chess, for example, it is not just the final, significant result, i.e. capturing the King or compromising the Queen, that matters, but all the steps that led there, which are equally important. Go is a difficult challenge for MCTS because it has a high ‘branching factor’, so the search cannot move fast enough to explore all the possible outcomes. In more recent advances in Go, MCTS, which was developed in its modern form from 2006 by Rémi Coulom, has had value networks and policy networks added on that address the speed factor. MCTS is not the fastest rabbit, but it is a very thorough one. Balancing thoroughness with speed is a challenge.

Niche:
Anything where decisions have to be made and complex, unknown scenarios mapped out for results.
It has applications in looking at how scenarios unfold and in managing situations that are ‘uncertain’, from virus replication to maximising profit in financial models, and in casinos.

Range:
Decision making, scenario mapping, games,  logical scenarios.

Alimentation:
The MCTS ‘eats’ random simulations to generate outcomes. It eats (eats into) complexity, unknown territories, in order to grow.

“Versions of MCTS that are used in AlphaGo actually eat history too, and are trained by being fed 30 million moves from games played by human experts till they could predict the human move 57% of the time.[2]”

It also eats parallel processing power. The more powerful the engine, the more smoothly and powerfully the algorithm runs. The version of MCTS in AlphaGo can evaluate 60 billion moves in three minutes.

Excretion:
It excretes decisions and outcomes and pathways that don’t work. It discards them. In more sophisticated reinforcement learning MCTS however, nothing is really excreted, everything is recycled back into the system as ‘learning’.

Coloration:
Green and glittering  (Trees and casinos)
“The beauty of optimisation”

Vocalisation:
Like a slot machine: games of probability resulting in options that do or do not work.

Sentimentalization:
Neurosis.
You worry about the outcomes,
How do you know which decision is going to result in ‘success’?
But there is actually no need to because all possible scenarios are being simulated and can be known.

Affinities/ Antagonisms:
Bayesian probabilities

Relationships/Ecologies:
Simulations
Unmapped places based on logic.
Bayesian
Needs  discrete generational structures
Asymmetric

Genealogy:
Chance, probability, dice, slot machines, gambling, simulations.

Mythology:
Brute Force
(Not sure)

Life History: The evolution of the MCTS

Minimax[3]
Everything is fully determined: all nodes, rules and moves. But this can take a lot of time to map out, especially in games with high branching factors, so trees are ‘pruned’ to discard low-value moves. Evaluation techniques help identify the most successful moves.

Alpha Beta
Alpha-Beta builds on Minimax by ‘pruning’ branches that cannot affect the final choice, so far fewer nodes need to be examined to identify the most successful moves. A minimal sketch of both follows.
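The sketch below is a generic illustration of Minimax with Alpha-Beta pruning, assuming a hypothetical game-state object with legal_moves(), play(move), is_terminal() and evaluate() methods; it is not Deep Blue’s code:

```python
def minimax(state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax search with alpha-beta pruning over a hypothetical game state."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()              # static evaluation of the position
    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, minimax(state.play(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:                # the minimizing player would never allow this line
                break                        # prune the remaining siblings
        return best
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, minimax(state.play(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:                # the maximizing player already has a better option
                break
        return best
```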

MCTS
As described

Reinforcement Learning
Not part of the evolution but an important step that has affected all of computing. Reinforcement learning makes more effective MCTS possible because the system is evaluated and encoded with what results in success. As in operant conditioning, it rewards the moves that result in success, causing a connection (i.e. an association, a memory) to be established.
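A toy illustration of that operant-conditioning idea follows; this is a deliberately simplified sketch, not AlphaGo’s training procedure, and the move names are placeholders:

```python
# After each simulated game, nudge the estimated value of every move that was
# played towards the game's outcome.
values = {}   # move -> estimated probability that playing it leads to a win
counts = {}   # move -> how many times its estimate has been updated
outcome_signal = {"win": 1.0, "loss": 0.0}

def reinforce(moves_played, outcome):
    for move in moves_played:
        counts[move] = counts.get(move, 0) + 1
        old = values.get(move, 0.5)   # start from an uninformed prior
        values[move] = old + (outcome_signal[outcome] - old) / counts[move]

reinforce(["e4", "Nf3"], "win")
reinforce(["e4", "d4"], "loss")
print(values)   # moves that keep appearing in winning games drift towards 1.0
```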

MCTS rollouts[4]
For large trees, rollouts are used to estimate the value of child nodes more accurately, with deep neural nets guiding the search. These neural nets are ‘policy networks’ and ‘value networks’: the first selects the move to play, and the second assesses its success and its value in the system.
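A hedged sketch of how such networks can slot into the MCTS loop sketched earlier: the policy_net and value_net functions below are crude stand-ins for AlphaGo’s deep networks, included only to show where each one plugs in, and the same hypothetical GameState interface is assumed:

```python
import random

def policy_net(state):
    # Stand-in for a learned policy network: assign a prior probability to each move.
    moves = state.legal_moves()
    return {move: 1.0 / len(moves) for move in moves}

def value_net(state):
    # Stand-in for a learned value network: estimate the chance of winning from here.
    return 0.5

def fast_rollout(state):
    # Play random moves to the end of the game, as in plain MCTS.
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    return state.result()

def evaluate_leaf(state, mix=0.5):
    """AlphaGo-style leaf evaluation: blend the value network's judgement with a
    fast rollout instead of relying on rollouts alone; the policy network's
    priors decide which children are worth expanding in the first place."""
    return mix * value_net(state) + (1 - mix) * fast_rollout(state)
```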

Reproductive Behaviour:
MCTS presents a saga of promiscuous connection, resulting in a dense grid of memory and history and in extensive generational sagas. Reproduction is based on getting lucky and seeing what works, i.e. making any and all random connections to see what results in a successful outcome. Hermaphroditic, combining the possibilities of complete reproduction within itself, like an earthworm, the MCTS’ reproductive behaviour seeks to reduce the time, and increase the efficiency, with which new, successful children can be made. With time and in every generation there is more effective, successful coupling, resulting in faster ways to produce successful children.

 


[1] Cameron Browne, Edward Powley, Daniel Whitehouse, Simon Lucas, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis and Simon Colton. ‘A Survey of Monte Carlo Tree Search Methods’. IEEE Transactions on Computational Intelligence and AI in Games, Vol. 4, No. 1, March 2012. http://www.cameronius.com/cv/mcts-survey-master.pdf
[2] http://googleresearch.blogspot.de/2016/01/alphago-mastering-ancient-game-of-go.html
[3] http://drops.dagstuhl.de/opus/volltexte/2013/4332/pdf/3.pdf
[4] http://googleresearch.blogspot.de/2016/01/alphago-mastering-ancient-game-of-go.html


[1] Arthur C. Clarke, ‘Quarantine’. Isaac Asimov’s Science Fiction Magazine, Vol. 1, No. 1, Spring 1977. https://www.research.ibm.com/deepblue/learn/html/e.8.2.html
[2] Google (2016). AlphaGo: Using machine learning to master the ancient game of Go. https://googleblog.blogspot.de/2016/01/alphago-machine-learning-game-go.html. Retrieved April 10, 2016.

Open Educational Resources (OER) Counselor and Advisor (OERCA)


 

Reading Algorithms Assignment
Liza Loop, 8 June 2016

A Description of Proposed AI Agent:

Open Educational Resources (OER) Counselor and Advisor (OERCA)

 

 

 

Liza’s questions after attempting to “execute” the assignment.

• If an algorithm is a set of “steps”, is it the steps an AI entity will take to accomplish the purpose it was built for or the steps the builder takes to create it?
• Can one discuss an algorithm (set of steps) without specifying the system within which it executes?
• How does an algorithm differ from any other description of the solution to a problem? Does it have to be executable? Is it different from a program?
• Are all problems or questions addressable through algorithms?
• When specifying an algorithm where does one put the criteria for a successful result?

Open Educational Resources (OER) Counselor and Advisor (OERCA)

This “entity” has the job of helping learners find OERs that have a high probability of being within their zone of proximal development, are intrinsically interesting (playful), and advance any consciously expressed educational goal/pathways/ladders.

Internal physiology

⁃ user experience interface
⁃ immediate objectives query
⁃ user profile intake
⁃ worldwide OER search
⁃ OER learning object analysis
⁃ learning experience tracking
⁃ learner feedback on efficacy
⁃ self evaluation module
⁃ comparison with other recommendations
⁃ self improvement module
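Since OERCA is a hypothetical agent, the sketch below is purely speculative: a guess at how the modules listed above might chain together, with every name, field, and rule invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    goals: list = field(default_factory=list)
    preferences: dict = field(default_factory=dict)
    history: list = field(default_factory=list)   # learning experience tracking

def recommend(profile, query, catalogue):
    """Worldwide OER search + learning-object analysis, filtered towards the
    learner's stated preferences and ranked by a simple quality score."""
    hits = [oer for oer in catalogue if query.lower() in oer["topic"].lower()]
    levels = profile.preferences.get("levels")
    if levels:
        hits = [oer for oer in hits if oer["level"] in levels]
    return sorted(hits, key=lambda oer: oer.get("rating", 0), reverse=True)

def incorporate_feedback(profile, oer, worked):
    """Learner feedback on efficacy feeds the self-evaluation / self-improvement modules."""
    profile.history.append((oer["title"], worked))

profile = LearnerProfile(goals=["basic statistics"], preferences={"levels": ["beginner"]})
catalogue = [{"title": "Intro to Statistics", "topic": "statistics",
              "level": "beginner", "rating": 4.5}]
print(recommend(profile, "statistics", catalogue))
```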

External physiology

• entity is accessible through the individual learner’s Open Educative Systems Portal
• appearance adapts to learner preference profile, may be visible on screen, audible, braille or appear as VR character
• level of assertiveness and intrusiveness set by learner profile
• provides links to specific OER learning objects in response to specific or implied learner queries
• monitors learner online activities for implied queries

Niche

Social — personalization, social network analytics, data gathering, crowd sourcing

OERCA performs all of these social functions. It is specifically part of the (hypothetical) Open Educative Systems proposed by Liza Loop.

Range

— What are the multiple ecologies this species inhabits? What is the distance and duration of the algorithm’s activities?

OERCA is available to anyone who has online access via any multipurpose device. It is a hybrid system — self-organizing, learning and modifying (i.e. an AI entity), monitored by the (hypothetical) OES standards committee. It connects to individual learner profiles, the OER swarm, learner-organized groups (classes), and certification organizations.

Alimentation — What does it ‘eat’?

It eats (takes in) its own harvested data, and input from its learner and/or from the parents and human coaches of dependent learners.

Excrementation — What does it ‘shit’? What are its traces or signs? How does one track the animal?

• Presence on/in the user interface
• Advice to independent learners and/or parents and human coaches of dependent learners
• Modifications of itself
• Data used as input to other agent entities
• Changed behavior of its learner

Coloration — How to address the aesthetics of the algo? (visualisation, what does it look like?)

OERCA’s presentation interface is initially self-modifying, depending on the preferences it extracts from the learner’s profile. Once this preliminary avatar self-constructs, it invites the learner to play with changing it. New preferences discovered through this interaction can be fed back into the learner’s profile at the learner’s discretion.

For example: A learner’s OERCA avatar might appear as a Frank Sinatra look-alike singing a version of “I did it my way” that includes interpretations of what the learner is trying to do and which OER resources might help accomplish this goal.

Vocalization — How to address the aesthetics of the algo? (sonification, what does it sound like?)

See Coloration

Sentimentalization — How does it feel? How does it make you feel?

OERCA should appear friendly, encouraging, trustworthy (how it feels to user)
Intrigued, challenged, respected, cared for (how user feels when using it)

Affinities / Antagonisms — What is it like? What is it not like? Friends and enemies?

like other successful personalization engines
like other user-modified avatars
like other data mining systems
like human counselors, teachers or coaches
like diagnosticians

unlike single-solution, linear algorithms
unlike “simple” systems
unlike un-networked or small memory systems
unlike deterministic systems that generate predictable outcomes

Relationships / Ecologies — What is it related to? What is “like” it, what other species is similar?

targeted advertising
decision support systems
data mining systems
user modifiable systems
learning machines

Genealogy — This would likely be a person / research group or party that invented / patented / etc., said algo. The standard system of genealogy is Domain, Phyla, Class…etc.

Domain = AI
Phyla = self-modifying systems (learning systems)
Class 1 = targeted advice
Class 2 = data mining
Class 3 = educational management

Political Field Guide to Studying Algorithms

By Group 4 (Ann-Christina, Irina, Marcell, Armin)

For us the challenge was not so much to categorize algorithms, which seems impossible considering their diversity and spread (see our prolegomena posted earlier), but rather to identify which algorithms are of political significance, to reverse engineer them from their appearance as black boxes, and then to identify their operations in their sociotechnical milieus and political domains.

We discussed three examples:

  • high-frequency trading algorithms, particularly those embodying three strategies: scalping, spreading and market-making (including spoofing, which is illegal).
  • the Volkswagen Dieselgate ‘Defeat Device’
  • Google’s PageRank


Algorithms and ideas the groups are working on/with

Algo 1

  • Target’s guest marketing analytics
  • Menstrual synchrony (biological, actually putting things in alignment)
  • Black-Scholes model (finance)
  • Vincenty’s algorithm (measuring distance on earth)
  • ForceAtlas 2 (graph drawing)

Algo 2

  • algorithmic interruption
  • algorithmic dementia
  • autistic algorithm

Algo 3 + Algo 7

  • self-organising algorithms, organisation of algorithms (conceptual approach)
  • game of life
  • proprietary algorithms!

Algo 4

  • three kinds of high-frequency trading algorithms (spreading, scalping, market-making [with spoofing])
  • Dieselgate
  • (sorting algorithms)
  • PageRank

Algo 5

  • NSA’s SKYNET program (random forest algorithm)

Algo 6

  • slither.io (game; behaviour of algorithms)
  • merge sort (sorting algorithm)
  • walk-through-raster (Markov chains; algorithmic art)
  • narrow-sensory-sleep-algorithm (acoustic > physiological effects)

Algo 8

  • perceptron backpropagation (learning algorithm)

Please visit Experiments&Interventions for full workshop documentation!

Prolegomena/Provocation To A Possible Classification of Computational Algorithms

[By Group 4 – Ann-Christina, Irina, Marcell and Armin (with Liza and Thomas) – please feel free to come and talk to us about this!]

  1. Complex phenomenology

It is extremely difficult to encounter algorithms – with their scripts and operators, or logics and controls – in the wild. Algorithms, cultural or computational, roam the world freely, sometimes at incredible speeds, through various matters, whether cultural practices and performances, fibre-optic cables on land, under sea or in the air, or swathes of data centres spread around the world. The algorithm can feel at home nearly everywhere, and can mutate to take on many different forms.

We can encounter algorithms simply as cultural scripts without obvious symbolic content, perhaps with primarily affective content. More often, though, they become knowable through their symbolisation in languages, whether these be natural human languages or – more regularly today – programming languages. It is at the moment of their conception and birth that algorithms become recognizable as such – with clear beginnings and ends, clear steps of movement and operation, and clear instructions for execution.

Yet in the wild algorithms – when not kept in a zoo or flashing up on a screen or visualised in other ways – appear mostly as black boxes, which makes it impossible to know whether we are encountering an algorithm at all. We can perceive effects, traces and excrements of algorithms, without knowing whether they necessarily belong to them. We can try to measure inputs and outputs, and attempt to reverse engineer the algorithms that processed them. But often algorithms are too complex to reconstruct.

Algorithms largely escape our sensory apparatuses, since they operate often at speeds below our threshold of perception, and in media not accessible to us – silent, invisible. Making algorithms perceptible therefore always requires mediation – whether it be a computer screen at its moment of inception, a visualisation of its effects or its operations, or an algorithm itself programmed to capture and expose its peers.

  2. Classifying Algorithms – Culture and Computation

The world is swamped by algorithms. Many of them are too small and too insignificant to take note of. A full classification of all algorithms would explode any book. Instead we propose two restrictions on what algorithms we want to focus on in this guide.

First, we want to focus on computational algorithms. Algorithms at a very basic level, understood as logic plus control, or a sequence of symbols connected by operators, are everywhere. ‘Algorithm’ is an excessive analytical category. And although we need to be aware of the wider population of algorithms that exist, as well as their genealogies, we propose to focus on computational algorithms. Algorithms must be symbolized in executable language – computer code – in order to be executed by computers. It is these algorithms, on the basis of the massive expansion of computational capacities and powers, that have overtaken the world and which are some of the most powerful agents in our world today.

Second, we choose to focus on socially and politically significant algorithms. Computational worlds are populated by a large number of minute and, if not meaningless, then certainly quite dull algorithms. No student of algorithms would want a guide for these. Instead, it is the algorithms which have gained in complexity and which are serious actors in our worlds that must be classified: algorithms that wield power and which are often impenetrable as black boxes, protected both by the milieus of their operation and the opacity produced by their secrecy.

  3. Time

One possible starting point for thinking about the classification of algorithms is the category of time. Time is of the essence to algorithms – every execution requires time, and often the speed of execution is key to the success of its operation, its effect on the world, and its harmony with its peers. Overall algorithms are marked by a fundamental time-sensitivity – steps must be followed, results are expected, other algorithms are waiting to continue the work. We propose a preliminary classification based on time:

  • Synchronicity. Algorithms which produce synchronicity and for which synchronicity is crucial, e.g. those that establish the time of the internet or that establish positions through triangulation of geographical data (e.g. in GPS). Here the challenge for algorithms is to establish synchronous time.
  • Competition. Algorithms for which competition is central, which compete on time with each other and whose operations can only succeed if they conform to certain timely requirements. These often operate in spaces where time itself as a category is up for grabs, where their milieus are constituted by a multiplicity of temporalities (e.g. in high-frequency trading). Here time is out of sync.
  • Sequentiality. Algorithms for which sequencing is central, where the sequence of steps and the tact of their operations is key – not only for their internal operations, but for collaboration among several algorithms. These are algorithms that often live in herds, which rely on each other to get work done on time, together, for each other. Here the different times of algorithms need to come into sync.

Transgressive Cookie Algorithm

Algorithms are often referred to as recipes. A wiki definition notes: “An algorithm is a step-by-step list of directions that need to be followed to solve a problem. The instructions should be simple enough such that each step can be done without thinking about it. Algorithms are often used to describe how a computer might solve a problem. But there are algorithms in the real world too. A recipe can be considered a type of algorithm. It tells what ingredients are needed to make the dish and what steps to follow. If the recipe tells exactly what to do without too much confusion, then it is an algorithm.”

Yet, the following of directions ‘without too much confusion’ can–and often does–yield many different approaches to solving a problem.

A chocolate-chip cookie recipe / algorithm, for instance, materializes in a wide range of ‘media’ and forms, including:


Algorithms are often encountered within computational and digital spaces as code enacting very specific scripts. But there are many opportunities for algorithms, as technical ensembles, to deviate from, swerve away from, and transgress the initial recipe to arrive at very different engagements and outcomes.

How might chocolate chip cookies, and a weird logic for how to follow instructions, open up these more indeterminate registers for how algorithms do work in the world?

Here, Boolean logic breaks down (or opens up) into an interpretive showcase, a speculative unfolding, and a less deterministic technological encounter.
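In that spirit, here is a playful sketch of the cookie recipe written out as an explicit, ordered list of steps. The quantities, times, and the single point of variation are placeholders, not a tested recipe:

```python
def bake_cookies(chips="chocolate"):
    # The 'recipe as algorithm': an explicit, ordered sequence of instructions.
    steps = [
        "cream the butter and sugar",
        "beat in the eggs and vanilla",
        "fold in the flour, baking soda and salt",
        f"stir in the {chips} chips",
        "portion the dough onto a tray",
        "bake at 180 C for about 10 minutes",
    ]
    for number, step in enumerate(steps, start=1):
        print(f"step {number}: {step}")

bake_cookies()                            # the 'canonical' recipe
bake_cookies(chips="white chocolate")     # one small, sanctioned deviation
```

Changing the parameter (or reordering, skipping, or misreading a step) is a small instance of the transgression described above: the same ‘algorithm’, followed ‘without too much confusion’, still yields different cookies.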

Field Entry R.A.P. Analysis Project

Reading Algorithms Filed Entry

Christoph Brunner

R.A.P. Rap Analysis Project

Using a database of rap lyrics from 1980 to 2015, the goal of the project was to develop a system for hit prediction based on topics, places, brands, and vulgarity retrieved from the song contents. The initial model applied was a bag-of-words model; it was later replaced by a support vector machine.

http://people.ischool.berkeley.edu/~nikhitakoul/capstone/index.html#features

Internal physiology

support vector machine (SVM)

–           based on supervised learning:

In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples.

It uses regression and classification analysis; a minimal code sketch follows the list below.

  • Classification as a form of pattern recognition
  • Regression as estimating the relationships among variables
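The sketch below shows a bag-of-words + support vector machine classifier in the spirit of the project, using scikit-learn. The lyrics and labels are invented placeholders, not the project’s data or code:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented placeholder lyrics and chart labels (the supervised signal).
lyrics = [
    "money cars brand names in the club",
    "streets of my city stories of struggle",
    "party all night radio hook repeat",
    "quiet verses about home and family",
]
was_top100_hit = [1, 0, 1, 0]

# Bag of words -> linear SVM, trained on the labelled examples.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(lyrics, was_top100_hit)

# Predicted label (hit or not) for a new, unseen song.
print(model.predict(["brand names and the club all night"]))
```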

 

External physiology

In terms of visualization, the insights from the analysis were merged into maps of the places referenced in each song with the aid of the Alchemy API. Visualization was achieved using CartoDB, D3, and Tableau. Although no longer accessible, the system made it possible not only to classify specific songs according to their probability of being a top 100 chart hit at a specific historical moment, but also to estimate the probability of a song’s content becoming a hit in the future.

The physical functions of the system are mostly visual expressions, like maps of the places referenced in songs and their correlation with crime rates (a high correlation). Another feature, no longer accessible, is the language output of specific words, their vulgarity and their profane character.

Themes of the songs can be expressed through the MALLET engine: a Java-based package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text.
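MALLET itself is a Java toolkit; as a rough stand-in, the sketch below shows the same kind of topic modelling with scikit-learn’s LDA, again on invented placeholder snippets rather than the project’s corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented placeholder snippets standing in for song lyrics.
docs = [
    "money cars champagne club",
    "block corner police struggle",
    "love heartbreak late night call",
    "money club night champagne party",
]
vec = CountVectorizer()
counts = vec.fit_transform(docs)

# Fit a two-topic model and print the top words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-4:]]
    print(f"topic {k}:", ", ".join(top_words))
```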

Another example of the extraction of data can be shown in the use of brands mentioned in songs, which have been visualized in a word cloud.

 

Niche

The niche that nurtures the system derives from academic research, as a capstone project for the MIDS program at the University of California, Berkeley. From this research environment the algorithm nests in the rich database of rap lyrics retrieved from rapgenius. Its main food is words, from which classifications derive. Its limitations and ecological boundaries might be defined by other aspects of rap, such as style, visual representation, class, race and gender, which might only partially become apparent through a language-only analysis of rap.

Its sphere of application might be the culture industry, where its analytical power can be used to model songs that have a high possibility of becoming a top 100 hit based on the words used in their lyrics. What is absent here are, of course, crucial elements such as beats, rhythms, loops and breaks.

 

Range

For the time being its range is limited; the different algorithms combined here are some of the most generic ones used to retrieve specific information from large sets of data based on supervised learning.

 

Alimentation

Rap lyrics from 1980 to 2015.

 

Excrementation

Specific outputs based on groups of data around specific words. Mostly visualization and mapping. Tracking is simply achieved by looking at the combination of supervised learning instructions and the aligned output methods.


Coloration

Visualizations occur in graphs based on the quantity of specific words that are formed into topics. The visual output character is rather simple, unattractive, and static.

 

Vocalization

No idea – this is the great gap. The output is mostly based on words like vulgar language which then would have to be put back into an analog mode of expression – rapping it.

 

Sentimentalization

It catches me by surprise; however, the full operationality of predicting language might have a larger impact and attraction.

 

Affinities/Antagonisms

This question surpasses my limited knowledge about algorithms

 

Relationships/Ecologies

The system creates a historical relation to the genealogy of rap lyrics, positions rap in a wider cultural climate with specific themes arising at specific points in time and finally provides potential use for the music industry. The relationships are historical, aesthetic, and economic at the same time.

 

Genealogy

Most of the system’s ingredients have been developed at different stages of computational mathematics.

 

The original SVM algorithm was invented by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1963.

 

Supervised learning and machine learning have their roots in Arthur Samuel’s work, which defined machine learning as the “field of study that gives computers the ability to learn without being explicitly programmed” (1959). The basic principle derives from Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, where the question “Can machines think?” is replaced with the question “Can machines do what we (as thinking entities) can do?”

 

The earliest form of regression was the method of least squares, which was published by Legendre in 1805, and by Gauss in 1809. Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821, including a version of the Gauss–Markov theorem.

 

The development of the R.A.P. analysis system has been realized through a capstone project for the MIDS program at the University of California, Berkeley in 2015.

 

Mythology

Moving from cognition to operation, do we lose the cornerstone element of AI principles?

 

Life history

The different elements are resilient due to their modular nature, being applied in various analyses of large amounts of data.

 

Reproductive behavior

Polyamorous sprawl of offspring-applications.