So far we’ve only been dealing with motion phenomena that are either random or controlled by external factors like user input or forces. But living beings not only have the agency to move; they are also purpose-driven. This is what we usually mean when we say life-like motion.
For example, take a scenario where a sheep has left its herd, which is managed by a cowboy. This is our external stimulus. The cowboy would like to retrieve the lost sheep (the goal). So he gets on his horse and steers it through the terrain, both avoiding obstacles and moving towards the sheep as fast as possible. The movement of the horse, where the autonomy of its movement comes from the cowboy, is life-like motion [Reynolds 99].
Similarly, imagine a gazelle that is hungry (an internal stimulus) on an open plain. The movement of the gazelle about the field reflects its desire to find and eat grass. Now suppose a cheetah is nearby and the gazelle realises it. In order to survive, the gazelle moves very differently compared to when it was grazing.
Simulating such motion phenomena is therefore essential. That is, we want to simulate entities that move in response to stimuli and/or an internal goal - agents that are autonomous in nature. A fundamental characteristic of such an agent is that it does not know the exact path to take in order to achieve its goal(s). Rather, at each instant of the situation / environment it is in, it actively decides how to move according to its goal.
For example, the gazelle evading the cheetah might run straight for some time, but then suddenly turn, if that is the better option at that instant.
This characteristic of autonomous behaviour will be the basis of our attempts to simulate such behaviour.
As described in the previous section, and from the name itself, we can derive an intuitive definition: an entity that has both autonomy and agency is an autonomous agent (AA). These are entities that have the ability to act and react in their environment, independent of any external control or intervention.
Now, the ability to act and react means to change some state of the environment - be it the agent itself or some other component. This requires two things:
The first is the ability to perceive at least some part of the environment - in order to react, the agent first has to make some observation. When agents are enabled for physical motion, this also implicitly implies that there are components other than the agent itself in the environment.1
The second is not always a necessary requirement, but given the context of motion phenomena / behaviours: the agent must be a part of the environment in order to act (move) in it.
In order to differentiate between the AAs of this project and a general AA, a classification must capture these characteristics of our autonomous agents. Following [Reynolds 99], we classify our agents as:
Isolated or Situated:
An agent is situated when there are other entities along with the agent in the environment. In other words, there is ‘something’ outside of the agent that is also part of the environment. This property enables the agent to interact with the environment. A chatbot, whose entire environment is its own screen (where all inputs are made part of the bot), is an isolated agent. A power grid controller, which operates alongside many other components of the grid, is a situated agent.
Embodied or Abstract:
Embodiment essentially refers to the second requirement mentioned previously: the physical existence of the agent within the environment. For example, an email bot that filters important and spam emails from your inbox need not have a physical presence; it is instead activated through some command / situation. Meanwhile, a vacuum-cleaning robot can clean the environment (the house) precisely because it exists in it.
Reactive or Deliberative:
The difference between reactive and deliberative agents lies in their ability to reason about the response to a given stimulus. The former does not possess such abilities - its response is an immediate output - while the latter deliberates and learns to optimise its output.
Clearly, since we are handling autonomous agents that possess motion behaviours, our agents fall under the categories situated, embodied and reactive. Here, by motion behaviour, we refer to the life-like motion of our entities.
To further break down the process of simulating motion behaviours in autonomous agents, we use a model that contains three stages: action selection, steering behaviour and locomotion. Using the earlier examples, let's look at how these individual stages play out.
Action Selection:
Involves goal setting and prioritising in response to the environment. For the cowboy this was to pursue/find the sheep. For the gazelle, this was to eat when hungry and to run away when chased by a cheetah.
For most cases, we will consider the chase and the grazing as two different simulations, but in some instances we will also look at combining these two behaviours and prioritising one over the other - when there is no threat the gazelle can focus on grazing, but when the cheetah is observed within some distance, the gazelle prefers survival over eating, and hence prioritises running away.
Steering behaviours:
Involves path determination for the given goal, i.e., where to move in order to achieve the needs of the agent: towards the stimulus (cowboy), away from the stimulus (gazelle fleeing), or randomly (gazelle grazing).
In our models, all of this boils down to calculating a steering force (vector) according to which the agent moves to a new position (as modelled in ). This steering force takes two things into account: the current velocity and the desired velocity (which is selected according to the agent’s goal) [Reynolds 99].
The steering force is then just the amount of force (and therefore acceleration) that should be applied in order to change our velocity from the current towards the desired.
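A quick numeric illustration of this (the values here are chosen arbitrarily for the sketch):
// Current velocity: moving right at speed 3; desired velocity: moving up at speed 3
let current = createVector(3, 0);
let desired = createVector(0, 3);
// steering force = desired velocity - current velocity
let steering = p5.Vector.sub(desired, current); // (-3, 3): brake the rightward motion, push upward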
Locomotion:
Involves the embodiment of the agent itself - physical appearance, visualisation of the steering behaviour, animation. E.g., the cowboy’s instruction to speed up can be realised on the horse through dynamic joints and muscles that go from walking to galloping - realistically simulating legged motion.
Or one can also simply animate a sequence of frames to provide the illusion of motion.
But since the focus of this section is on autonomy itself, as long as the viewer can observe and agree that the motion (i.e., the change in position each frame) is perceived to be autonomous, it does not matter what the actual mechanics behind that motion are.
Therefore, we will - as done till now - work with a circle as an abstracted embodiment of our agents.
First, we shall implement the characteristics common to agents of all motion behaviours in an abstract class AutonMover - which itself doesn’t contain a specific motion behaviour.
Situatedness and Embodiment have already been built into our simulation. The agent becomes situated as soon as we define our canvas and initialise our agent in setup():
agent = new AutonMover(x, y, r);
Embodiment follows from the properties assigned to the agent at construction - the size, initial velocity, acceleration and position of the agent.
class AutonMover {
constructor(x, y, r) {
this.r = r;
this.D = r * 2; //Diameter of the particle - used in defining shapes
this.pos = createVector(x, y);
this.vel = p5.Vector.random2D();
this.acc = createVector(0, 0);
...
}
...
}
In order to be a reactive agent, it needs to both perceive parts of the environment (take in input) and have a reaction process according to its input (produce output). This will be done through methods of the class. For example:
seek(target) {
  // do some calculations on the perceived input
  // return the calculated output
}

avoid([objects]) {
  ...
}
where ‘target’ and ‘[objects]’ are inputs that the agent takes from the environment1, and the output is the calculated reaction - which in our case is a steering force.
Now, none of our simulations of autonomous agents will handle gravity, friction or other external forces; the only part that concerns physical laws is the calculation of new positions. Therefore, to simplify our code, let’s assume that the mass of every agent is 1.
With F = ma and m = 1, the acceleration equals the force, which changes the calculation in applyForce() to just adding the force to the acceleration:
applyForce(force) {
this.acc.add(force);
}
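For contrast, a version that keeps the mass explicit would divide the force by the mass first. A minimal sketch, assuming a this.mass property (which AutonMover does not actually have):
applyForce(force) {
  let a = p5.Vector.div(force, this.mass); // Newton's second law: a = F / m
  this.acc.add(a);
}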
This also implies that calculating the acceleration needed to change our velocity from current to desired is the same as calculating the steering force; therefore:
steering force = desired velocity - current velocity
The above equation is common to all agents, regardless of how the desired velocity is calculated (i.e., which behaviour we are dealing with), so we implement a method steer() in AutonMover that calculates and applies the steering force:
steer(){
let steer = p5.Vector.sub(this.desired_vel, this.vel);
steer.limit(this.maxForce);
this.applyForce(steer);
}
Action Selection: Is implemented through classes. Since action selection is the base guide for differentiating between motion behaviours, for each kind of behaviour we take all the common features of an AA from the AutonMover class and extend it with method(s) unique to that class, implementing the specific behaviour2.
This notion of extension is implemented in JS as class inheritance, with the syntax:
class Behaviour1_Agent extends AutonMover {
  ...
  behaviour(param1, param2, ...) {
  }
  ...
}
Now, when accessing the properties of a Behaviour1_Agent instance, one can also access the properties of the parent class AutonMover with the same syntax.
I.e., when we access Behaviour1_Agent.pos, JavaScript first looks for pos in the current class, and if it is not found, looks for it in the parent class and returns that value.
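A small hypothetical example of this lookup in action:
// Properties and methods are resolved up the inheritance chain
let a = new Behaviour1_Agent(100, 100, 10);
console.log(a.pos);                 // pos is found on AutonMover (set in its constructor)
a.behaviour();                      // found directly on Behaviour1_Agent
a.applyForce(createVector(0, 0.1)); // inherited from AutonMover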
Steering behaviour: Is implemented as the steer() method above. Also, since the desired velocity is constantly used and recalculated, we make it a property - desired_vel - initialised during construction in AutonMover.
Locomotion: Includes everything implemented for the embodiment characteristic - the visualisation of our agent in the environment. One further visual element that would be useful is incorporating the current velocity into the circle, i.e., emphasising the current direction. This can be implemented in several ways with the help of the push() and pop() functions3. Here we shall implement a dynamic mouth animation - which opens and closes in a rhythmic, oscillatory fashion.
display(distinctDirection = false, mouthSize = PI / 10) {
  if (distinctDirection) {
    push();
    fill(100);
    translate(this.pos.x, this.pos.y);
    rotate(this.vel.heading());
    // lowerLip oscillates between 0 and mouthSize
    let lowerLip = mouthSize / 2 * sin(frameCount * 0.1) + mouthSize / 2;
    arc(0, 0, this.D, this.D, lowerLip, TWO_PI - lowerLip, PIE);
    pop();
  }
  else {
    //Draw a circle at the current location (x,y) with radius r = D/2
    circle(this.pos.x, this.pos.y, this.D);
  }
}
The implementation idea becomes clear once one looks at the arc() start/stop parameters: we take the sin() function’s output (with frameCount as input) and scale and shift it so that it oscillates between 0 and mouthSize.
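Equivalently, the same scaling can be expressed with p5’s map() - a small alternative sketch:
// sin() output in [-1, 1] mapped to a mouth opening in [0, mouthSize]
let lowerLip = map(sin(frameCount * 0.1), -1, 1, 0, mouthSize);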
Other considerations of locomotion are the physical limitations that we would like the agent to exhibit, similar to how it would in real life. Since the steering force is internal rather than external, it has a maximum possible value which depends on the mechanisms we assume our agents to work with. For example, if the mode of locomotion is a car, then the value is limited by the maximum power produced by the engine and by the grip and turning gear of the tires. If it is an animal, then the value depends on muscular strength and power.
In order to impose those qualities within our abstract implementation, we introduce the maxForce property - which limits the maximum magnitude of the steering force, as seen in line 2 of steer(). Similarly, to keep our velocity magnitude from taking extreme values and to set a base value for the desired velocity’s magnitude, we introduce the maxSpeed property. This concludes all required properties of our AutonMover class:
class AutonMover {
constructor(x, y, r) {
this.r = r;
this.D = r * 2; //Diameter of the particle - used in defining shapes
this.pos = createVector(x, y);
this.vel = p5.Vector.random2D();
this.acc = createVector(0, 0);
this.desired_vel = createVector();
this.maxSpeed = 5;
this.maxForce = 0.125;
this.posHistory = [];
this.showHistory = true;
}
  /* All the other methods like seek() and applyForce() etc. follow */
}
Note, the use of posHistory and showHistory will be explained in 4.3.4
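One method used throughout but not shown here is update(); a minimal sketch following the mover model used so far (the project’s actual update() may differ in details):
update() {
  this.vel.add(this.acc);        // integrate acceleration into velocity
  this.vel.limit(this.maxSpeed); // impose the physical speed limit
  this.pos.add(this.vel);        // Euler integration of the position
  this.acc.mult(0);              // reset the accumulated forces for the next frame
}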
As mentioned, every agent’s implementation is an extension of the AutonMover class, the extension being only the implementation of a particular behaviour.5 Therefore, going forward, we shall only mention those extended methods and assume everything else is the same, unless mentioned otherwise.
For a seeker:
Goal / Selected action: To chase a target object (stationary or moving)
Steering behaviour / Desired velocity calculation : towards the object at maxSpeed
Implementation:
seek(target){
  this.desired_vel = p5.Vector.sub(target, this.pos);
  this.desired_vel.setMag(this.maxSpeed); // move towards the target at maxSpeed
  this.steer();
}
Right now, although the seeker is aware of the position of its target, it has no perception of depth/distance, and therefore it overshoots the target once it reaches it. We can include this in our implementation by adding a distance check using the magnitude of desired_vel, and scaling our desired velocity to slow down once close to the target.6 We implement this by adding an extra boolean parameter to seek() - arrive - if true, we run the extra code within the if block.
seek(target, arrive = false){
this.desired_vel = p5.Vector.sub(target, this.pos);
let distance = this.desired_vel.mag();
// Added code
if (arrive && distance < 100){
let desiredMag = map(distance, 0, 100, 0, this.maxSpeed);
this.desired_vel.setMag(desiredMag);
}
else{this.desired_vel.setMag(this.maxSpeed)};
//End of added code
this.steer();
}
Now you have a simulation of a seeker that can choose to have depth perception:
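A hedged usage sketch, assuming a global seeker created in setup() and an update() like the one sketched earlier:
// Hypothetical draw() loop: the seeker chases the mouse with arrival behaviour
function draw() {
  background(220);
  seeker.seek(createVector(mouseX, mouseY), true); // arrive = true
  seeker.update();
  seeker.display(true);
}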
The above example also subtly hints at the relationship between the agent’s input and output: the amount of sophistication in the agent’s steering behaviour is proportional to the amount of information it can perceive and process from the canvas.
For an evader:
Selected action: Running away from a pursuer/target
Steering behaviour: The desired velocity points away from the target at maxSpeed - the negative of the seeker’s desired velocity.
Implementation:
evade(target){
this.desired_vel = p5.Vector.sub(this.pos, target);
let distance = this.desired_vel.mag();
this.desired_vel.setMag(this.maxSpeed);
return this.steer();
}
NOTE: The above snippet looks different because it is from a newer implementation of all agents, where steer() doesn’t apply the force but just returns whatever force should be applied. This will be further explained in chapter 5.
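For reference, a minimal sketch of what that newer steer() might look like (the full version appears in chapter 5):
steer() {
  let steer = p5.Vector.sub(this.desired_vel, this.vel);
  steer.limit(this.maxForce);
  return steer; // the caller decides when and how to apply the force
}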
Similar to the seeker, we give the evader a sense of ‘depth perception’ by calculating the distance from the target, slowing down when far enough away - perceived safe - and speeding up beyond the usual maxSpeed when close enough - perceived unsafe. Including this in our implementation, we have:
evade(target, safeAware = false){
...
if (safeAware){
if (distance < 150) {
let desiredMag = map(distance, 150, 0, this.maxSpeed, this.maxSpeed*2);
this.desired_vel.setMag(desiredMag);
return this.steer();
}
if (distance > 500) {
let desiredMag = map(distance, 600, 500, this.maxSpeed * 0.2, this.maxSpeed, true);
this.desired_vel.setMag(desiredMag);
return this.steer();
}
}
...
}
Therefore we have our simulation:
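A hedged usage sketch of this returned-force pattern, assuming a global evader:
// Hypothetical draw() loop: the evader flees the mouse, applying the returned force
function draw() {
  background(220);
  let force = evader.evade(createVector(mouseX, mouseY), true); // safeAware = true
  evader.applyForce(force);
  evader.update();
  evader.display(true);
}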
For a wanderer:
Selected action:
Sustained-direction random walk6 - no erratic changes in direction of motion
Steering behaviour:
A dependent steering force - a small (random) change to the steering force from the previous frame
Implementation:
Seeking a randomly changing point on a circle whose centre lies ahead of the agent
Since we are extending the seeking behaviour implemented above, we extend our Wanderer class from Seeker. When the Wanderer is initialised we need a few more properties to be defined; therefore, in Wanderer’s constructor, we want all of Seeker’s properties plus the new ones. Just as this refers to the current instance, super refers directly to the parent class, therefore:
class Wanderer extends Seeker{
  // The seek target is kept as a private field, declared in the class body
  #target = createVector();
  constructor(x,y,r){
    super(x,y,r);
    this.wanderRadius = 40;
    this.predictionInterval = 100; // How far ahead to place the circle's centre
    this.predictedPos = createVector(0,0); // The predicted position ahead of the agent
    // Since we make random changes to the target position along a circle,
    // we use the angle (b/w the target from the centre and the current direction) and make small changes to it
    // i.e., we work with polar coordinates
    this.targetAngle = radians(random(0,360)); // Initialise to some random angle - same as random(0, TWO_PI)
    this.displayWanderCircle = true;
  }
  ...
}
where the unexplained properties have the obvious interpretation from their names. The calculation of the target is implemented as calculateWanderTarget()7:
calculateWanderTarget(){
this.predictedPos.set(p5.Vector.setMag(this.vel, this.predictionInterval));
this.predictedPos.add(this.pos);
this.targetAngle += random(-0.3,0.3);
this.#target.set(p5.Vector.fromAngle(this.targetAngle + this.vel.heading(),this.wanderRadius)); // target = (θ, r) is in polar coords
this.#target.add(this.predictedPos);
}
Now, because update() in Seeker can already seek, all we have to do is pass the Wanderer’s internal target as the target argument when calling it as usual in draw().8
( - without the randomRadius lines)
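A hedged sketch of such a draw() loop, assuming Seeker’s update(target) internally calls seek() and that target is the getter exposing the private #target field:
function draw() {
  background(220);
  wanderer.calculateWanderTarget(); // pick the new point on the wander circle
  wanderer.update(wanderer.target); // Seeker's machinery seeks the internal target
  wanderer.display(true);
}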
Finally, to aid understanding of the implementation, we explicitly draw the mechanics of the wanderer (if displayWanderCircle = true) by calling the method displayCircle():
displayCircle(){
//the circle is not filled with colour to avoid confusion with the actual mover
noFill();
line(this.pos.x,this.pos.y,this.predictedPos.x,this.predictedPos.y);
circle(this.predictedPos.x,this.predictedPos.y, this.wanderRadius*2);
line(this.predictedPos.x,this.predictedPos.y,this.#target.x,this.#target.y);
circle(this.#target.x,this.#target.y,5);
}
Therefore, updating our display() function to be:
display(distinctDirection = false, mouthSize = PI / 10){
  if(this.displayWanderCircle){
    this.displayCircle();
  }
  super.display(distinctDirection, mouthSize);
}
Implementing a random radius:
For the given parameters, the wanderer’s motion, though it has sustained turning, loops too often; its pattern of movement feels predictable relative to usual ‘random walks’, and the radius of this circular movement differs only by a small amount.
A partial attempt at resolving this is picking a random radius for the target circle at every frame (i.e., introducing a conditional into draw()), while still constraining it between a minimum and a maximum value - the maximum being the obvious choice, since the circle shouldn’t intersect with the Wanderer (then what would it mean to target something inside you!), and the lower bound being large enough to remain visible:
wanderer.wanderRadius = randomRadius
  ? constrain(wanderer.wanderRadius + random(-2, 2), 5, wanderer.predictionInterval - wanderer.r)
  : wanderer.wanderRadius;
The main reason we have implemented the wandering behaviour, instead of the other autonomous behaviours mentioned in [Reynolds 99], is the theme of randomness that we have explored until now. Then the question arises: when, all things considered, most of the movers up to this chapter possess random movement, what makes the Wanderer special?
As an attempt to answer this question, we will compare Normal walkers, Perlin walkers and Wanderers (in order of their visually ‘smooth’ movement). But in order to compare them, we need them to have comparable parameters and similar implementations - and as we’ve seen, the implementations of our random walkers vary according to their own context.
Therefore, we first generalise the structure of our walkers and rewrite the code containing them according to the general mover model from 1.3.3, where the update function is a modified version of the pseudo-code in 2.3.1:
update(p1, p2, … , pn){
updateVelocity(applied quantity, p1, p2, … , pn)
new pos = 𝑓( velocity [, old pos] ) // i.e., new pos = old pos + velocity
checkEdges()
}
Here we calculate the velocity in updateVelocity() every frame from the applied quantity, which is generated either internally or externally (and may take environmental factors into consideration). Then 𝑓 calculates the new position using Euler integration, i.e., new pos = old pos + velocity, and if we end up outside the canvas, checkEdges() places us back into it (at the opposite end of where we went out). Ensuring all three types of movers follow this structure allows us to do two things: compare their paths on an equal footing, and give them common parameters - the canvas width and height, and an initial velocity of p5.Vector.random2D().
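As a concrete sketch, this generalised update translates to JS roughly as follows (the parameter handling is schematic):
update(...params) {
  this.updateVelocity(...params); // sets this.vel from the applied quantity
  this.pos.add(this.vel);         // Euler integration: new pos = old pos + velocity
  this.checkEdges();              // wrap back into the canvas if we left it
}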
Now, though we can make direct visual inferences about the change in a mover’s behaviour (i.e., its path) when a parameter is changed, it would be better to have an exact visual representation of the path, so that we remove the bias that might occur in our observations and can also catch minor differences. But we cannot go back to not calling background() every frame, because that causes clutter in our simulation, especially when it runs for long periods of time. Therefore we add a posHistory array to our AutonMover class, maintained by updatePosHistory(), which appends the current position every frame and removes the oldest one once we are above the limit:
class GenericMover {
  display(){
    if (this.showHistory) {
      for (let i = 0; i < this.posHistory.length-1; i++) {
        let prevPos = this.posHistory[i];
        let currPos = this.posHistory[i + 1];
        // Check if the Mover has jumped edges; if so, skip drawing this segment
        let posChange = p5.Vector.dist(prevPos, currPos);
        if (posChange >= windowWidth || posChange >= windowHeight) {
          continue;
        }
        line(prevPos.x, prevPos.y, currPos.x, currPos.y);
      }
    }
    //Usual things to draw in display
  }
updatePosHistory(){
// Limiting the maximum number of elements so that the array doesn't grow indefinitely
// Array.shift() removes the 1st element, which is the oldest position visited by the walker
this.posHistory.push(this.pos.copy());
if (this.posHistory.length > maxArrayLength) { // The max limit can be modified - could be added as a parameter
this.posHistory.shift();
}
}
update(){
...
this.updatePosHistory()
...
}
}
We also add the boolean property showHistory to toggle drawing the path - set to true for all our movers in this section. Now the only difference between each of these movers is the implementation of updateVelocity() and the parameters that affect it.
(Note, though, that there is other extra functionality built into each of these movers for their own context, left unchanged from previous implementations - but it does not affect the movement in any way.)
Below are the visual observations for each of these movers. They are meant to be read alongside the particular implementations (so please check this branch of the GitHub repo) and are based on set parameters. The observations are made in comparison to each other, with the Wanderer as the baseline, and a few observations are also made w.r.t. each mover’s own parameters.
NormalWalker:
updateVelocity = setAngle()
And update = step() where,
setAngle(sigma = PI/8, mu = this.vel.heading(), changeRate = 1, changeColor = false){
// Function to set the direction angle of the walker's velocity
// changeRate - rate at which we change the angle in terms of frames
// changeColor - Whether walker's color changes when the direction changes significantly (>2 SDs)
if (frameCount % changeRate === 0) {
// randomGaussian() returns a random sample from a N(0,1)
// Therefore by performing scaling and translation of the distribution we have
// the directionAngle distributed according to N(current direction,π/8) or N(μ,σ)
this.directionAngle = randomGaussian() * sigma + mu; //σ and μ were picked experimentally
// this.directionAngle = randomGaussian(mu, sigma); // equivalent built-in call
// Changing color if change in direction is more than 2 SDs
if ((changeColor) && abs(this.directionAngle - mu) >= 2*sigma) {
this.col = color(random(255), random(255), random(255)); // color is set by choosing random RGB values
}
}
}
and step() follows the general pattern implemented above:
step(vel_mag=3,chk_edges=false) {
this.updatePosHistory(1000);
this.setAngle(PI/8, this.vel.heading(), 1, false);
this.vel.setHeading(this.directionAngle);
// v.setHeading(direction) is same as v = v.mag() * (cos(directionAngle), sin(directionAngle))
this.vel.setMag(vel_mag);
this.pos.add(this.vel);
if (chk_edges){this.checkEdges();}
}
(The simulation shown in chapter 2 is sufficient for the following observations, but an implementation following the above model can be found here.)
Observations
Visually similar to an ant - the movement is a bit jagged, almost as if the walker is visiting and getting a sense of all the areas neighbouring its current position.
The jagged motion is due to the fact that, though the mean is the current direction, there is no correlation between the previous and current angle, so opposing directions can be picked in consecutive frames.
There are also immediate revisits in the form of small loops, which can be attributed to the independent ‘randomness’ of the angle value. Suppose a large value towards the left is picked once, and independently another left-value is picked (and so on - not necessarily in succession, but within close frames); since there are 60 frames in a second, a loop occurs.
Both the jerkiness and the small loops can be largely eliminated if the SD is set to a very small fraction of π. This also causes loops to almost never occur, which may be undesirable if we want unpredictability and/or some local-in-time revisiting to occur.
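For instance, an illustrative call with a much smaller SD (the values are arbitrary):
// σ = π/64 instead of π/8: smoother paths, loops become very rare
walker.setAngle(PI / 64, walker.vel.heading(), 1, false);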
PerlinWalker
updateVelocity = static noisyVelocity()
Extended functionality to add noise to the step size as well, directly inside step(), with parameters:
noisyStepSize: toggle for adding noise to the step size, default = true
relativeMaxStepSize: decides the maximum of the range of the noise added
NOTE: setting the slider value to 0 is equivalent to setting noisyStepSize to false
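Since the PerlinWalker’s implementation isn’t reproduced here, the following is only a rough sketch of what a Perlin-based velocity might look like, consistent with the observations below (all identifiers are assumptions, not the project’s actual names):
// Hypothetical sketch: x and y are sampled from noise() at a large offset,
// so the two coordinates are effectively independent
static noisyVelocity(t) {
  const OFFSET = 10000; // large offset decorrelates the two coordinates
  let vx = map(noise(t), 0, 1, -1, 1);
  let vy = map(noise(t + OFFSET), 0, 1, -1, 1);
  return createVector(vx, vy).setMag(3); // magnitude scaled to a constant
}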
Observations:
Visually similar to the wanderer, but behaving more unpredictably - almost as if it is juking and avoiding something.
That is, there are far fewer immediate revisits and much less jerkiness compared to the normal walker. Loops occur, but the interval between them and their size vary a lot. The motion is always smooth, even when the loop size is small.
All of the above observations can be attributed to two qualities of the implementation:
The highly correlated output given by the PNoise algorithm
The scaling of the magnitude to a constant, even though the raw output varies anywhere within the unit square
So when PNoise returns smaller values for both the x and y coordinates, the change gets magnified by the magnitude scaling and extremely small loops occur; when larger values are returned in both coordinates, the change gets diminished by the scaling.
This, added to the fact that we make the x and y coordinates independent of each other - by taking a large offset in their input values - causes the highly unpredictable yet smooth motion.
If less smoothness is desired, one can simply increase the change in the input value each frame, reducing the correlation between successive positions.
Wanderer
updateVelocity = calculateWanderTarget() + steer()
Implementation is the same as given in the previous subsection.
Observations:
Visually similar to a police vehicle patrolling a particular open area. Revisiting occurs in loops that are much closer to circles than the ellipses seen in the other movers, and the loops are never small; therefore, each revisit feels more intentional than happenstance, despite its random nature. Loops also occur the most often, with other curved/straight paths being rare.
The movement therefore presents itself as extremely smooth and circular - like a double compass whose pivot is changed at random intervals of time.
First, it’s important to note that these observations align with the design choices and model intentions6 of long-term order (in terms of looping behaviour) and sustained turning [Reynolds 99]. Both the intended goal and the observations can be explained by:
Constraining the seek target onto a circle and changing its position randomly on it. This causes the direction to stay constant much longer: the position of the target depends on its previous position, and though there is no correlation in the direction the target moves, since half of the circle causes the mover to move in a single direction, sustained turns are created.
The limitation of the steering force inherited from the Seeker plays the main role in the lack of smaller loops: the Wanderer is never moving to its desired destination ‘as fast as possible’, which makes the sustained turning more pronounced.
If one desires a Wanderer that is less circular/revisiting, increasing the amount of change of the target, or the predictionInterval, is required (reducing correlation, similar to the PerlinWalker). To reduce the long-term order that we see, we would have to introduce more randomness (with less correlation), and I found that randomising the radius works best, as seen in the previous subsection.
Concluding thoughts of this section:
From the above observations, it is clear that if one had access only to the simulation output, and not the implementation or the design of the model, it would be hard to call only one of them autonomous while the others are not. It would seem to the viewer that either all are autonomous or none of them are, depending on their perspective.
For one might think it absurd to call ‘roam around’ or ‘walk randomly’ a goal at all in the first place. One could argue that it becomes a goal once you introduce ‘sustained turning’ for the Wanderer. But one could equally argue that introducing ‘be unpredictable’ or ‘change direction to avoid being caught by a predator’ makes it a goal for the PerlinWalker. Such ambiguity could be resolved (or an attempt made) by then looking at the model itself and the intentions behind each choice.
Then one could say that there was no intention in the implementation of the PerlinWalker beyond providing a smooth random walker, while the clear design intentions behind the Wanderer (which are definitely achieved) have directly affected its implementation - where we use the desired velocity and steering forces to drive the motion of the mover.
But one could draw equivalences between the algorithm of the PerlinWalker and that of the Wanderer, or even make them similar without introducing new design intentions - and there is still the problem of the uninformed viewer. Therefore, more work must be done on the definition of an autonomous agent to clearly distinguish between the cases mentioned here.
I therefore end this section by claiming that the classification of the Wanderer is an edge case - and observing that including it may force us to include the PerlinWalker as autonomous as well. As expected, one should be able to go back from the visuals to the intended outcomes and check what is going on; all arbitrary variable choices in this project are, in essence, an outcome of this effect.
This also implicitly implies that we give the agent only limited information about the environment. This is an intentional design choice: providing too much information is redundant and also unrealistic - no being has unlimited perception of its surroundings; otherwise its choices would follow a global optimum rather than a localised behaviour.
This does impose one limitation of OOP in JS: to define an agent with two behaviours, one cannot simply give it the properties of two classes implementing them separately. We would have to implement a new class containing both sets of properties - extra code doing the same thing, violating the DRY principle of programming. To combat this, we can instead derive the individual behaviours from the class implementing both - though this reduces code length, it is unintuitive in approach.
We could technically contain the entire autonomous agent implementation inside AutonMover and use callback functions to implement specific behaviours - i.e., create functions for particular behaviours and pass them into AutonMover. But there is a line to be drawn between understandable and compact, which is beyond the scope of this thesis.
We could also increase the speed as the agent initially gets closer - visually simulating an agent that gets ‘excited’ as it observes the closeness of its target.
The Wanderer was implemented first; observations were made; then the design choices of [Reynolds 99] were compared with the observations - asserting that the action selection / ‘goal’ set there was achieved. This is further explained in
We can get the target although it's private, because we defined a getter method for it within the Wanderer class.
Ideally, the implementation of the wanderer should be self-contained within the class, but in the given code it is incomplete without the actual sketch, because the randomisation of the radius of the target circle is implemented globally - although it could very well be implemented within the class.