The development of realistic automated entities to take part in wargames is challenging because of their complexity: they must operate in an environment with a large state space, under uncertainty due to limited visibility of the opponent's manoeuvres and capabilities.
The rise of game-playing AI agents in a variety of games played against humans offers a possible solution. In particular, real-time strategy (RTS) games are a platform in which the coordination of heterogeneous combat units to defeat an opponent is a prominent feature.
For example, AI trained using multi-agent Reinforcement Learning (RL) has been developed for the game StarCraft and has been ranked above the vast majority of human players. Earlier examples, documented in the review by Robertson and Watson, include other approaches based on RL, in addition to Monte Carlo Tree Search and Case-Based Planning techniques. The application of evolutionary algorithms to the challenge of creating automated opponents for RTS games has also shown some promise.
Evolutionary algorithms are in the family of metaheuristic techniques that have seen wide use in operations research (Gupta et al.). Recently, their use combined with simulation-based evaluation, termed simheuristics, has seen an increase of activity (Juan et al.).
Our evolutionary approach offers the potential benefit of explainability over the current state-of-the-art approaches, not so much due to the evolution, but due to the behaviour tree construct that is evolved. A neural network is typically seen as a black-box approach, offering an output in response to an input without offering a rationale for that output: it is virtually impossible for a human to discern the behaviour resulting from its weights.
Strategies discovered through Monte Carlo based techniques are also difficult for a human to understand, as that technique relies on mass-scale simulation, typically against a random opponent and using statistical properties of winning to determine the next action, rather than trying to capture the logic for those actions.
Behaviour trees, on the other hand, are a structure that was originally designed for human readability, as they were intended for game designers to encode the behavioural logic of computer games (Isla). Thus, for a chosen behaviour of a particular unit, it is possible to trace back manually through the behaviour tree and identify the decisions that were made in order to choose that behaviour.
This paper is an extended version of the work presented in Masek et al. The key to our approach is the composition of a small set of possible actions into a behaviour tree structure that can function as a controller for the Blue Team entities in a wargame simulation. Rather than use an explicit set of rules to compose the possible actions into a high-level behaviour, we allow this high-level behaviour to emerge through the process of evolution, driven by simulation-based evaluation.
Building on our initial work in Masek et al., a novel means of using the tree in the agent model to determine the action taken is introduced in Sect. This approach, which allows timing to be evolved into the behaviour tree, was able to evolve tactics that realise complex collaborative agent behaviour from a small set of coarse primitive actions. We evaluate the approach using small-scale scenarios with known recommended tactics. Full-scale realistic scenarios are not suitable for this purpose, as their complexity often means that multiple strategies can be employed and it is uncertain which one is the best.
Our design of a set of small-scale scenarios was guided by the work of Hullett and Whitehead, who documented a set of game level design patterns for first-person shooter computer games. Results using the design patterns of an open area and a choke point were presented in Masek et al.
Additionally, in this paper we introduce a scenario based on the stronghold design pattern and use it to explore differing tactics that arise when the Blue Team has partial visibility as opposed to full visibility of the scenario.
Two more complex scenarios are also evaluated: Urban Warfare, which was also presented in Masek et al., and Land Warfare. In addition to the expanded set of scenarios, we also compare our approach to a set of existing techniques and analyse our approach in terms of the computational resources required.
The rest of this paper is organised as follows: background on evolutionary algorithms and behaviour trees is covered first; Section 3 contains a description of our approach of using genetic programming to evolve behaviour trees that are evaluated through an agent-based simulation.
Section 4 contains a description of the experiments and results, followed by the conclusion.

Evolutionary algorithms are population-based search and optimisation algorithms, using a set of solutions that evolve over a number of generations.
They can scale to large, complex search spaces and can cope with discontinuities and local optima in the fitness landscape. This gives evolutionary algorithms an advantage in warfare scenarios where a number of different strategies can lead to success, but the success of a strategy will change based on the opponent.
In particular, there is an element of intransitivity in the ranking of strategies, preventing a simple linear ranking. The complexity of warfare scenarios makes it difficult to find an objective measure of fitness for a solution, and in that respect simulation-based evaluation is typically favoured.
It must be acknowledged, though, that this shifts the burden from having to determine an objective fitness measure to having to decide which aspects of warfare are simulated and at what fidelity, and the problems of dealing with uncertainty persist.

Behaviour trees are data structures where nodes of the tree represent condition checks and actions of an in-game entity (typically an agent).
They are widely used in the creation of hand-coded game AI. Complex behaviours can be built by arranging simple control flow and action nodes in a tree structure, which is traversed depth first to determine which actions to execute.
The behaviour tree provides flexibility as it lends itself to a hierarchical structure, where behaviour trees corresponding to simple behaviours can be used as sub-trees in a larger behaviour tree that implements a more comprehensive higher level behaviour.
The transparent nature of behaviour trees, in that the decision points leading to executed actions are distinctly identifiable in the tree, lend them to applications in behaviour modelling for wargaming and RTS games.
For example, the Norwegian army is using manually derived behaviour trees to model behaviour for combat units (Evensen et al.). There is limited existing work on the use of evolutionary algorithms to produce behaviour trees. Perez et al. used a constrained tree structure composed of behaviour blocks and remarked that behaviour trees are a good solution for combining a variety of different behaviours. A less constrained application, and one more relevant to combat, is that of Berthling-Hansen et al.
They were able to evolve follower behaviour for a simulated soldier in a 3D environment based on the primitive behaviours of checking distance to a target and moving towards it. Other approaches exist where behaviour trees are evolved for specific lower-level goals and are then used within a higher-level controller to generate a larger behaviour tree that represents a more complex behaviour.
An example is the work of Lim et al. In their work, evolution was used to separately create a number of behaviour trees, one for each task in the game. These trees were then used in a manually designed game playing agent. Similarly, Hoff and Christensen used genetic programming to generate behaviour trees for individual aspects of the game Zero-K, which they then combined together to produce a player agent.
Our approach aims to reach further: to produce a complete AI approach for addressing tactical combat scenarios through behaviour trees, evolved so as to determine the appropriate actions for a set of diverse units from the beginning of the scenario until its end.
The components and steps of our approach are presented in Fig. Input consists of a set of condition and action node types that can be evolved into a behaviour tree, a fitness function, and a scenario in which solutions will be simulated. The fitness function is used to transform the simulation-based outcomes into a single measure of fitness. Reproduced from Masek et al.
Provided with the input of the behaviour tree primitive node set, fitness function and scenario, the evolutionary algorithm (1) creates an initial population and then iterates between (2) evaluating the current population and (3) producing the next generation. Evaluation is performed by a simulation engine, in our case MicroRTS. At each generation, the fittest individual serves as a candidate solution.
The evolution component of the approach starts by generating an initial population and iterating between evaluation and production of the next generation. Evaluation is performed by sending each individual of the population, in turn, to the evaluator. The evaluator instantiates the scenario and uses the individual as the behaviour tree controller for the Blue Team entities. Once a scenario is complete, measures of effectiveness, such as units remaining and their status, are sent back to calculate fitness using the fitness function.
Production of the next generation is through the use of a selection scheme to choose candidates from the previous generation. Each candidate may go into the population of the next generation unchanged, or based on a probability, undergo the genetic operators of mutation or crossover.
The specific details of the operators used in our experiments will be discussed in the next section. As the evolutionary process is an iterative search, it may continue for an indeterminate number of generations, though a stopping criterion may be imposed.
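The loop just described can be sketched as follows. This is a toy illustration only: the real-valued "individuals", the fitness stand-in, and all names are placeholders, not the authors' implementation (real individuals are behaviour trees evaluated by MicroRTS simulation).

```python
import random

def evolve(pop_size=10, generations=20, mutation_prob=0.5):
    fitness = lambda x: x                      # stand-in for simulation-based evaluation
    population = [random.random() for _ in range(pop_size)]   # (1) initial population
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)  # (2) evaluate
        parents = scored[: pop_size // 2]                       # selection
        children = []                                           # (3) next generation
        for p in parents:
            child = p
            if random.random() < mutation_prob:
                # mutation: a small random change, kept within [0, 1]
                child = min(1.0, max(0.0, p + random.uniform(-0.1, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)        # fittest individual as candidate solution

best = evolve()
print(round(best, 2))
```

In the real system, evaluation replaces the stand-in fitness with a full scenario simulation, and the variation operators are the tree-specific mutation and crossover described later.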
At each generation, output is produced in the form of a candidate set of solutions (the current population) whose behaviour can be manually examined. Typically this examination is performed on the individual with the highest fitness.

MicroRTS is a simple real-time strategy game initially designed for testing theoretical ideas in AI research.
MicroRTS is played on a game board divided into a grid of square tiles, which are either accessible or inaccessible. A game takes place over a set number of time units. These time units are abstract and do not correspond to real time.
The game time can be slowed down so that a human can participate or visualise a game, but for use in AI training MicroRTS can run games as fast as the processing hardware allows. We have used four of the unit types available within MicroRTS, and their capabilities are summarised in Table 1. Each unit type has a certain number of Hit Points, an integer value corresponding to the amount of damage the unit can receive before being destroyed.
The Base Unit is immobile and lacks any attack capability. We have used the Base in our experiments as a point for other units to attack or defend. We have used three mobile combat unit types: Light (a proxy for infantry), Heavy (a proxy for armour) and Ranged (a proxy for artillery). These units can move from one tile to the next in a 4-connected fashion (i.e. up, down, left or right). The Heavy Unit is the slowest, taking 12 time units to move between tiles, with the Light Unit the fastest at 8 time units per move.
The Heavy and Light have an attack range of 1, meaning they can only attack units in adjacent 4-connected tiles. The Ranged unit has an attack range of 3 tiles, and has the ability to attack diagonally. The time needed to perform an attack is the same for each unit type, at 5 time units. The amount of damage inflicted by a successful attack is between 1 and 4 hit points, depending on unit type.
In our experiments we did not make use of MicroRTS units that can produce other units or gather resources (common gameplay mechanics in RTS games), so as to constrain the problem space to one closely related to military wargaming. For the Blue Team, we utilise a behaviour tree controller for each agent, and this controller is evolved in our approach. Behaviour trees are modular, goal-orientated models, which are hierarchical in nature.
A behaviour tree is constructed from a set of nodes. The primitive set of behaviour tree nodes that we have used in our experiments is listed in Table 2. The terminal, or leaf, nodes correspond to the input set of condition and action nodes and are specific to the domain (i.e. the wargame simulation).
The non-terminal nodes in a behaviour tree act to provide a logical structure for the terminal nodes and are generic. We use two standard non-terminal nodes: the selector node, which attempts to execute each of its child nodes in order until one succeeds or all are visited, and the sequence node, which attempts to execute all children in order until one fails or all are visited. In implementing our behaviour tree model we experimented with three different ways of traversing (ticking) the behaviour tree in order to determine the action a unit is ordered to perform.
The first is a conventional approach, and the other two are novel, specifically formulated to encourage more complex emergent behaviour from the evolved behaviour tree. Conventional approach: on each query the tree is traversed from the root. Some actions, such as moving a unit to a particular position, cannot be completed instantaneously, whereas others can be completed immediately. In a conventionally ticked behaviour tree, traversal continues past an action that completes immediately, and if another action node is subsequently encountered, that action overrides previous actions.
Pause node approach: an explicit pause node type is included in the primitive node set; when ticked, it ends traversal of the tree for that query. Per node approach: each time the behaviour tree is queried, a single node is ticked, with subsequent queries of the tree ticking subsequent nodes. In this approach, rather than requiring an explicit pause node, a pause occurs at every node. Through evolution, this encourages behaviour that needs to occur earlier in the scenario to appear closer to the top of the behaviour tree. In each case, the time interval between queries of the behaviour tree to determine the next action was 10 game time units.
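The selector and sequence semantics described above can be illustrated with a minimal sketch; the condition and action names are hypothetical, and each node is simplified to a callable returning success (True) or failure (False).

```python
# Selector: succeeds as soon as one child succeeds (tries alternatives in order).
# Sequence: fails as soon as one child fails (runs steps in order).

def selector(*children):
    return lambda: any(child() for child in children)

def sequence(*children):
    return lambda: all(child() for child in children)

# Hypothetical nodes: "attack if an enemy is in range, otherwise move to the base".
enemy_in_range = lambda: False   # condition node (no enemy visible here)
attack = lambda: True            # action node
move_to_base = lambda: True      # action node

root = selector(
    sequence(enemy_in_range, attack),  # fails: the condition returns False
    move_to_base,                      # so the selector falls through to this
)
result = root()
print(result)  # -> True (move_to_base executed)
```

The `any`/`all` short-circuiting mirrors the depth-first, stop-on-success (selector) and stop-on-failure (sequence) traversal described above.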
A set of experiments was run to determine which approach to ticking the behaviour tree produced suitable results through evolution. Experiments were conducted in the Plain Terrain scenario, which is described in more detail in Sect.
Twelve experimental runs of evolution were conducted with each tree ticking approach. Each run consisted of running the evolutionary process for 20,000 generations, with each run using a different combination of mutation and crossover probability settings. For each experimental run, the best solution in the final generation, and also the best and average fitness across the generations, were examined to determine a suitable tree ticking scheme.
In examining the solutions resulting from the experimental runs, the Conventional Approach failed to generate a winning set of tactics through evolution. In all twelve of the Conventional Approach experimental runs, the fittest solution at the end of the evolutionary process resulted in a defeat (loss of all Blue Units and the Base). In the Pause Node approach, eight out of the twelve experiments resulted in the fittest solution from the last generation winning (no Red Units or Bases remaining), with the remaining four losing.
For the Per Node approach, eight experiments resulted in a winning solution in the final generation, four resulted in a draw (both Red and Blue Units remaining), and there were no losses. The progress of the evolution for these comparison experiments is shown in Fig.
This figure shows the average of the best solutions in each generation for each of the three approaches. It shows the clear dominance of the Per Node approach in evolving a solution with higher fitness sooner; thus this approach has been utilised in the subsequent experiments reported in this paper.
A plot of the average best fitness in each generation for the three tree query approaches. Each point is the average of 12 fitness values, corresponding to the fittest individuals in each experiment for a particular generation.
The scenarios themselves are described in detail in Sect.

In our approach we utilised genetic programming, an evolutionary algorithm first proposed by Koza. In genetic programming, individuals take the form of trees, with the standard genetic operators such as crossover and mutation adapted into versions that work specifically with tree structures.
This makes genetic programming a suitable approach for behaviour tree evolution: tree structures are evolved directly from the primitive node set. The specific operators and settings used in our implementation are discussed below. The initial population is randomly generated using two methods, grow and full, with each method used to create half of the population.
Each method produces trees of a specified maximum depth using two steps: (1) growing the tree using non-terminal nodes until the depth reaches one level less than the intended tree depth, and (2) adding a layer of terminal nodes. The grow method constructs the tree randomly from the root, and may terminate a branch with a terminal node before reaching one level less than the intended depth, whilst the full method ensures all branches have non-terminal nodes up to that level.
An example showing these two steps for the grow and full methods, generating trees of depth 4 with node arity 2, is shown in Fig. For our experiments we used a minimum and maximum tree depth of two and six respectively, with node arity (number of children) between 1 and 4.
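The grow and full methods can be sketched over a flat-list tree representation. The node names and the 0.3 early-termination probability are assumptions for illustration, not the paper's primitive set or settings.

```python
import random

NONTERMINALS = ["selector", "sequence"]
TERMINALS = ["move", "attack", "enemy_in_range"]   # placeholder node set

def make_tree(depth, max_depth, method, max_arity=4):
    if depth == max_depth:                          # step 2: terminal layer
        return random.choice(TERMINALS)
    # 'grow' may terminate a branch early with a terminal; 'full' never does.
    if method == "grow" and random.random() < 0.3:
        return random.choice(TERMINALS)
    arity = random.randint(1, max_arity)            # step 1: non-terminal layers
    return [random.choice(NONTERMINALS)] + [
        make_tree(depth + 1, max_depth, method, max_arity) for _ in range(arity)]

def tree_depth(tree):
    if isinstance(tree, str):
        return 1
    return 1 + max(tree_depth(child) for child in tree[1:])

full_tree = make_tree(1, 4, "full")
grow_tree = make_tree(1, 4, "grow")
print(tree_depth(full_tree))   # -> 4 (full always reaches the intended depth)
print(tree_depth(grow_tree) <= 4)
```

The full method yields bushy trees of exactly the intended depth, while grow yields more varied shapes; using both (ramped half-and-half) diversifies the initial population.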
Allowing deeper trees would permit more complex behaviours to be represented; on the other hand, this implies an increase in the search space, which would necessitate a larger population size, more generations, or both, to allow the algorithm to find an optimal solution. Example of tree growth using the Grow and Full methods.
Here the desired depth of the tree is 4 and the arity of each node is 2. Our fitness function is based on the difference between the Blue and Red Teams' hit points at the end of the simulation. To keep the fitness a positive number, we add the constant maxHP, which is the hit points of the team that starts with the largest number of hit points. The hit points of a team are calculated by summing the hit points of each unit on that team.
Damage during the scenario is modelled as a loss of hit points, with a unit removed from the scenario when its hit points reach zero. A good solution under this fitness function corresponds to a high fitness value, minimising Blue losses whilst maximising damage inflicted on Red.
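A hedged sketch of the fitness computation as described: reward Blue hit points remaining, penalise Red hit points remaining, and shift by maxHP so the value stays positive. The function signature and the example hit-point values are assumptions; the exact formula in the paper may differ from this reconstruction.

```python
def team_hp(units):
    """Sum the remaining hit points of each unit on a team."""
    return sum(units)

def fitness(blue_units, red_units, max_hp):
    # max_hp: starting hit points of the team that begins with the most HP,
    # added as a constant so the fitness remains positive.
    return max_hp + team_hp(blue_units) - team_hp(red_units)

# Hypothetical outcome: Blue ends with units on 4 HP and 1 HP; Red is wiped out.
# maxHP = 16 is an assumed starting total for the larger (Red) team.
print(fitness([4, 1], [], 16))  # -> 21
```

A scenario where Blue is wiped out and Red keeps 5 HP would instead score 16 + 0 - 5 = 11, so surviving Blue units and Red losses both push fitness upwards.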
Similar fitness formulations appear in related work (Gajurel et al.). Depending on the scenario, other fitness functions may be more appropriate. Selecting individuals from the current generation for inclusion in the next generation is based on the fitness measure; we use the Stochastic Universal Sampling (SUS) scheme.
The SUS scheme works in such a way that all individuals, regardless of fitness, have a chance of being selected into the subsequent generation, with higher-fitness individuals having a greater chance; however, SUS is also designed so that the fittest solutions cannot be chosen exclusively, without less fit solutions also being represented. It is important for low-fitness solutions to have a chance of being selected, especially early in the process of evolution, to give the process time to explore the search space. Converging early risks settling on a local optimum that happens to be near an individual in the initial population, where a better optimum might exist in an unexplored part of the search space.
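A sketch of how SUS achieves this: n equally spaced pointers are placed over the cumulative fitness "wheel", so a high-fitness individual cannot be over-selected beyond its fitness share and low-fitness individuals keep a chance. The population and fitness values below are toy examples.

```python
import random

def sus(population, fitnesses, n):
    total = sum(fitnesses)
    step = total / n
    start = random.uniform(0, step)                   # one random spin ...
    pointers = [start + i * step for i in range(n)]   # ... n evenly spaced pointers
    selected, cumulative, idx = [], 0.0, 0
    for p in pointers:
        while cumulative + fitnesses[idx] < p:        # advance to the slot containing p
            cumulative += fitnesses[idx]
            idx += 1
        selected.append(population[idx])
    return selected

population = ["a", "b", "c", "d"]
fitnesses = [1.0, 5.0, 1.0, 1.0]
picked = sus(population, fitnesses, 4)
print(len(picked))  # -> 4 ("b" is favoured but cannot take every slot)
```

Because the pointers are evenly spaced, an individual holding 5/8 of the total fitness is guaranteed at least two of the four slots but can never sweep all of them, which is exactly the bounded selection pressure described above.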
SUS is a standard selection scheme in evolutionary computation in general, and not particular to genetic programming. Once an individual is selected for the next generation it has a chance, based on the mutation probability, of undergoing mutation, which applies a change to the individual so as to explore the solution space.
Since our individuals are represented as trees, a mutation scheme especially designed for this type of representation needed to be chosen in order for the resulting mutated trees to remain valid. We have used Subtree mutation, a standard tree mutation operator, where a random node on the existing tree is chosen and the subtree of that node is replaced with a randomly generated subtree.
The new subtree is generated using the grow method, with the same constraints on minimum and maximum depth for the complete tree as discussed in the initialisation section. This has the effect of replacing a tree branch with a new random subtree; if applied to the root node, the entire tree is replaced with a new individual.
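Subtree mutation can be sketched as follows. The nested-list tree representation and node names are illustrative, and the grown replacement subtree is passed in as a stub rather than generated by the grow method under the real depth constraints.

```python
import random

def all_paths(tree, path=()):
    """Enumerate index paths to every node; () addresses the root."""
    paths = [path]
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            paths.extend(all_paths(child, path + (i,)))
    return paths

def replace_at(tree, path, subtree):
    """Return a copy of tree with the node at path replaced by subtree."""
    if not path:
        return subtree
    new = list(tree)                    # copy-on-write along the path only
    new[path[0]] = replace_at(tree[path[0]], path[1:], subtree)
    return new

def subtree_mutation(tree, grow):
    """Replace the subtree rooted at a randomly chosen node with a grown one."""
    target = random.choice(all_paths(tree))
    return replace_at(tree, target, grow())

parent = ["selector", ["sequence", "enemy_in_range", "attack"], "move"]
mutant = subtree_mutation(parent, grow=lambda: "retreat")  # stub "grown" subtree
print("retreat" in repr(mutant))  # -> True
```

If the chosen node happens to be the root, the mutant is an entirely new tree, matching the behaviour noted in the text.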
Crossover, which aims to exploit aspects of existing solutions by exchanging parts of two individuals, is applied, with each individual introduced into the new population having a probability of undergoing crossover.
We used single point subtree crossover, a standard crossover scheme for tree structures, where two individuals are first chosen using SUS. A random point is then chosen on each individual tree and the two sub-trees from those nodes are swapped over.
In the extreme case, where crossover occurs at the root of individual A and a branch of individual B, the branch becomes a new tree of its own, replacing A, while the entire tree of A takes the place of the branch on B.
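Single-point subtree crossover can be sketched in the same illustrative representation (nested lists with placeholder node names); genetic material is exchanged between the parents, never lost.

```python
import random

def all_paths(tree, path=()):
    paths = [path]
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            paths.extend(all_paths(child, path + (i,)))
    return paths

def get_at(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_at(tree, path, subtree):
    if not path:
        return subtree
    new = list(tree)
    new[path[0]] = replace_at(tree[path[0]], path[1:], subtree)
    return new

def subtree_crossover(a, b):
    """Swap randomly chosen subtrees between two parents, returning two children."""
    pa, pb = random.choice(all_paths(a)), random.choice(all_paths(b))
    sub_a, sub_b = get_at(a, pa), get_at(b, pb)
    return replace_at(a, pa, sub_b), replace_at(b, pb, sub_a)

parent_a = ["selector", "attack", "move"]
parent_b = ["sequence", "enemy_in_range", "retreat"]
child_a, child_b = subtree_crossover(parent_a, parent_b)
print("attack" in repr((child_a, child_b)))  # -> True (material exchanged, not lost)
```

When the root of one parent is selected, the whole of that parent migrates into the other child, which is the extreme case described in the text.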
This section provides a description of the experiments undertaken to evaluate the approach and the results produced. Firstly, a set of scenarios is introduced. The section concludes with a discussion of computational requirements and scalability. The purpose of using these scenarios is to assess whether the optimal behaviour recommended for such scenarios can be evolved into the behaviour tree. Increasing the grid size allows larger, more complex scenarios to be modelled, but increases the simulation time needed to allow extra time for units to traverse the larger terrain.
The scenarios will now be detailed. The Plain Terrain scenario is shown in Fig. Each side has a Base in opposite corners of the map. The capabilities of the two sides are summarised in Table 3. Since the only cover provided in this scenario is the units themselves, the movement of the Blue Units needs to be heavily coordinated.
This scenario uses a section of non-traversable terrain coloured in green to separate the grid into two areas with a narrow path the choke point between them.
Whilst being non-traversable, the terrain does not block visibility, nor the capability to shoot over it, assuming the target is within shooting range. The capabilities of the Blue and Red Teams in this scenario are listed in Table 4. In this situation, where Blue is outnumbered by Red, the optimal strategy, as indicated by Hullett and Whitehead, is for Blue to use the narrow choke point to reduce the power of the larger Red Team.
Due to the narrow path and the surrounding terrain, the size of the force that Red is able to mount against Blue is constrained. The stronghold is a special case of the arena design pattern, with a limited number of entrances, which may be choke points. This set of two scenarios, differing by the size of the choke point entrances, is depicted in Fig.
The central area acts as a stronghold, surrounded by impassable terrain apart from two entrance points near opposite corners of the map. To reach the stronghold, the Red attackers must traverse a narrow, winding path (a choke point). The Red Team consists of 14 Heavy Units. The capabilities of the two sides are summarised in Table 5. This scenario explores the behaviour that emerges for the Blue side in a stronghold environment with ambush possibilities for both Blue and Red.
Blue could let itself be distracted by attackers coming from one entry point, leaving the Base vulnerable to ambush from the other entry point. Alternatively, Blue could ambush Red Units as they pass through the path leading to the stronghold, targeting them using Ranged Blue Units. This Stronghold scenario has also been used to compare full visibility for Blue versus partial visibility for Blue, with the sight radius of units indicated in Table 5.
The Urban Warfare scenario is shown in Fig. The capabilities of the two sides are shown in Table 6. The Blue Units are placed at the bottom of the map. The scenario includes a large Red Base in the top right-hand corner, offering a large incentive if destroyed.
A number of Red combat units are placed in the bottom right-hand corner. An alternate, safer path to the large Red Base exists on the left-hand side of the map, an example of the Flanking Route design pattern from Hullett and Whitehead. Urban Warfare scenario, shown un-annotated (top) for clarity, and annotated (bottom) to indicate the role of unit placement and environment.
A defence-oriented AI is used for the Red Units, which remain at their position until a Blue Unit is within a certain distance. Because of the large map size, the maximum number of time units per simulation has been increased from that used in the other experiments. The capabilities of Blue and Red are summarised in Table 7. Land Warfare scenario. Here impassable terrain (green) has been used to depict a river, with crossing points indicated. Interesting features include the choice of river crossing choke points, and two protected Red snipers (Ranged Units protected by terrain) at one river crossing, along with larger heterogeneous teams for both Blue and Red.

For each scenario, multiple evolution runs were undertaken, each from an initial population of solutions obtained by random initialisation.
Experiments were performed with varying mutation and crossover probabilities, resulting in twelve combinations of settings, i.e. twelve experimental runs of evolution for each scenario. For the Land Warfare scenario only six of these were completed, due to computation time requirements.
In each run, evolution occurred for 20,000 generations for all scenarios except Urban and Land Warfare, where it was reduced to 10,000 generations to decrease computation requirements.
The Per Node tree traversal scheme was used. The fittest solution found in each run was subsequently examined to analyse the behaviour exhibited by the Blue Team agents. The tactics employed by the Red Team were static in each scenario.

The Plain Terrain scenario offers no accessibility obstructions for the agents on the map. The open space can put the Blue Team at a tactical disadvantage by potentially allowing the larger Red Team to surround the smaller Blue Team.
In all of the fittest solutions from each experiment (one example is shown in Fig.), the Blue Team manoeuvres around the Red Units. This is accomplished by the Blue Team initially moving to the right, whereas the Red Team starts moving to the left. This is partly an artefact of the movement scheme, as units move in a 4-connected fashion and are unable to move diagonally.
The Blue Ranged unit then stays close to the wall, preventing it from being surrounded. Whilst the Blue Heavy protects the Blue Ranged, it constantly moves back and forth to prevent being targeted by the adjacent Red Units.
Progress of one strategy through the Plain Terrain scenario: the initial positioning is shown in (a); the Blue Team manoeuvres around the Red Units and proceeds to destroy the Red Base (b), after which the Blue Heavy provides cover for the Blue Ranged as it destroys the Red Units (c and d). The Plain Terrain set of experiments found the pairing of the Heavy Unit with the Ranged Unit to be an effective team composition if the units work together. In the initial generations, the two Blue Units tended to act independently and were quickly eliminated.
Solutions where the two Blue Units travelled in proximity, with the Blue Heavy Unit shielding the Blue Ranged Unit from Red attackers, proved superior. The approach was successful in finding a solution to the Plain Terrain scenario in most of the evolution runs. Of the twelve runs of the experiment, eight resulted in the entire Red Team being eliminated, and four resulted in a situation where each team had surviving units at the end of the simulation time.
From the Choke Point scenario experiments, a typical evolved behaviour for Blue was for the Heavy Unit to enter the choke point, blocking any advancing Red Team units, shown in Fig.
This action protects the Blue Ranged Unit and allows it to target Red Units that come within range, seen in Fig. The Heavy Unit then emerges from the choke point, using either the left wall or the inaccessible terrain to prevent itself being surrounded, seen in Fig.
The Blue Heavy Unit then proceeds, similarly to the behaviour seen in the Plain Terrain scenario, to move and distract the Red Team whilst shielding the Blue Ranged Unit. Another tactic that emerged in this scenario was for the Blue Ranged Unit to attack over the impassable wall, seen in Fig.
Blue Team tactics evolved in the Choke Point scenario, including (a) the Blue Heavy first blocking the chokepoint entrance, followed by (b) the Blue Ranged attacking over the Blue Heavy. Other tactics seen included (c) the Blue Heavy using the left boundary or impassable terrain for protection, and (d) the Blue Ranged shooting over impassable terrain. As was found in the Plain Terrain scenario, in the first generations, before a strong solution could be evolved, the Blue Units acted without showing any coordination and were quickly defeated.
Of the sixteen experiments, fifteen resulted in a win for the Blue Team, including experiments where Blue remained unharmed, and one resulted in a loss.

In the Stronghold scenario with partial visibility, the Blue Units can only see within a limited sight radius: Heavy Units have a sight radius of two tiles and Ranged Units a sight radius of three tiles. The dominant tactic that evolved in these experiments consisted of the Blue Units staying in a tight formation, remaining close to the Base, seen in Fig. The Blue Units engage as a team, with the Ranged Units doing the majority of the damage, an example of which is shown in Fig.
Blue tactics evolved in the Stronghold scenario with limited visibility, including (a) Blue Units remaining near the Blue Base and (b) the Blue Ranged and Heavy Units working together. This experiment further demonstrated the capable team composition of the Heavy and Ranged Units. Due to the high frequency of Red attackers in this scenario, one of the Blue Heavy Units and one of the Blue Ranged Units were eliminated early in most cases. The remaining Blue Heavy and Ranged Units were still able to eliminate the majority of the Red Team and survive.
The approach was successful in finding a solution to the Stronghold scenario with partial visibility: all experiments resulted in a win for the Blue Team. The Stronghold scenario with full visibility is the same scenario as for partial visibility, but the Blue Units have full vision, as in all other scenarios experimented with.
The dominant tactic involves the Blue Units staying in a tight formation and aggressively attacking the Red Units, an example of which is shown in Fig. The Blue Units engage as a team, with the Ranged Units doing the majority of the damage. Tactics evolved for the Blue Team in the Stronghold scenario with full visibility: (a) an example of Red Units being engaged by a grouping of Blue Units, and (b) Blue Ranged Units using the terrain to their advantage by firing over it whilst safe from attack.
This experiment showed less of the previously dominant tactic, in which the Heavy Units protected the Ranged Units; instead, the Blue Units aggressively attacked the Red Team before it could group up and outnumber them.
The Ranged Units also took advantage of their ability to attack over impassable terrain, where they were safe from the enemy, seen in Fig. These experiments saw the Blue Units using the terrain to their advantage to defeat a larger opposing team. The algorithm was successful in finding a solution to the Stronghold scenario: all experiment runs resulted in a win for the Blue Team.

In the Urban Warfare scenario, the prevalent tactic that evolved involves the Blue Units initially staying in a tight formation at the start of the sniper path, but out of range of the snipers, waiting for the mobile Red Units to attack them, seen in Fig.
Once most of the Red attackers are eliminated (Fig.), the Blue Units proceed to destroy the Red Bases. Once the Red Bases are destroyed, Blue makes no further attempt to target the snipers. Tactics evolved for the Urban Warfare scenario.