Analyzing a Suspected Russian Influence Operation with Synapse
by savage | 2023-04-10
By now, there is a fairly widespread awareness that both state and non-state actors are capable of leveraging social media in coordinated attempts to influence opinions, spread misinformation, and engage in other activity in pursuit of their own objectives. Since Russia's use of social media to interfere with the 2016 US Presidential election, multiple other social media influence operations have come to light, including those allegedly seeking to manipulate public discourse for the benefit of China, Russia, Venezuela, and Uganda, among others. In addition to exposing influence operations that take advantage of its platform and suspending the accounts responsible, Twitter had previously made data from these operations available to researchers, although the company has since restricted data sharing to members of the Twitter Moderation Research Consortium. In this blog, we'll use Synapse to model and analyze an influence operation focusing on the conflict in Syria, which Twitter disclosed in February 2021 and attributed to the Main Intelligence Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU).
Modeling the Accounts and Posts so as to ask Questions of the Data
At the time of our research, Twitter published data related to information operations it had identified on its platform. In our case, this meant that we had access to multiple CSV files, at least one of which contained more than 26,000 rows of data. With the help of Optic's Ingest Tool, we were able to import the information from the CSV files into Synapse, modeling 49,236 nodes in total. This included:
53 inet:web:acct nodes
20,100 inet:web:post nodes (which we used to capture mentions of different accounts and groups, as well as replies and reposts)
5,987 inet:web:hashtag nodes
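To make the modeling concrete, the Storm sketch below shows roughly what one account and one of its posts look like as nodes, written as manual edits rather than the Ingest Tool's actual output. The hashed username, dates, post text, and the #rep.twitter.gru_syria tag are hypothetical placeholders, not values from the dataset.

```
// Create an account node keyed on a (hypothetical) hashed Twitter username,
// recording its creation date and claimed location, and tag it so the
// operation's nodes are easy to lift later.
[ inet:web:acct=(twitter.com, 3f2a9c1d7b8e4f60a5d2c4e8b1f09a77)
    :signup=2016-09-14
    :loc=sy
    +#rep.twitter.gru_syria
]

// Create one of the account's posts as a guid node linked back to the account.
[ inet:web:post=*
    :acct=(twitter.com, 3f2a9c1d7b8e4f60a5d2c4e8b1f09a77)
    :time="2020-03-05 09:12"
    :text="Placeholder tweet text."
    +#rep.twitter.gru_syria
]
```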
We used the modeled data to answer questions like the following (a few example queries are sketched after the list):
What accounts did Twitter include in this data release?
When were the accounts created and what information did they share (names/IDs, tagline, links, stated location)?
What did the accounts post, and when?
What posts did the accounts reply to?
What posts did the accounts retweet?
What hashtags did the posts use?
What URLs did the posts include?
What other accounts did the posts mention?
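As a hedged illustration, queries along the following lines answer several of these questions, assuming the hypothetical #rep.twitter.gru_syria tag from the sketch above marks the released accounts:

```
// Which accounts did Twitter include, and what properties do they carry?
inet:web:acct:site=twitter.com +#rep.twitter.gru_syria

// What did those accounts post during 2020, and when?
inet:web:acct:site=twitter.com +#rep.twitter.gru_syria
-> inet:web:post
+:time>=2020-01-01 +:time<2021-01-01

// Which posts did the accounts reply to?
inet:web:acct:site=twitter.com +#rep.twitter.gru_syria
-> inet:web:post
:replyto -> inet:web:post
```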
A Statistical Overview of the Activity
We were able to use Synapse’s statistics display mode to visualize an overview of account and post activity, including:
A Timeline of the Account Creation Dates
A timeline of the account :signup dates shows a clear gap between two accounts created in 2011 and 2013 and the remaining accounts, which were created between September 2016 and November 2020. This may suggest that the first two accounts were existing accounts that the threat actors compromised for use in the influence operation.
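A minimal Storm sketch for pulling those two populations out of the graph, again using the hypothetical tag from the examples above:

```
// The two accounts created years before the rest of the infrastructure
inet:web:acct:site=twitter.com +#rep.twitter.gru_syria +:signup<2016-01-01

// The accounts created during the 2016-2020 build-out
inet:web:acct:site=twitter.com +#rep.twitter.gru_syria
+:signup>=2016-09-01 +:signup<=2020-11-30
```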
A Breakdown of the Accounts' Listed Locations
The majority of the 24 accounts with locations listed claimed to be based in Russia (12) or Syria (8). One account listed Oslo, Norway, as its location, while another listed the US. Two accounts did not list specific geographic locations.
A Timeline of When the Accounts Posted Tweets
A Bar Chart Showing the Accounts' Posting Activity by Hour of the Day (UTC)
Based on the bar chart, the operators' activity appears to follow a standard workday schedule in which most activity begins by hours 5-6, pauses presumably for a lunch break, then picks up again in the afternoon before largely ending after hour 18. In a UTC+3 time zone, this would equate to the operators beginning their day between 8-9am, pausing for lunch around 1pm, then resuming work until 9pm. The UTC+3 time zone includes Moscow, which is also the location of the headquarters of Russia's GRU, to which researchers at Twitter attributed the influence operation. It's important to note here that this type of analysis, which we refer to as Pattern-of-Life, is not conclusive on its own, but can help support an attribution assessment when combined with other evidence.
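The statistics display mode produces this breakdown in the Optic UI; as a rough sketch, a similar hour-of-day tally could be computed directly in Storm along these lines (the tag is the same hypothetical placeholder used above):

```
// Tally the operation's posts by hour of the day (UTC)
$tally = $lib.stats.tally()

inet:web:acct:site=twitter.com +#rep.twitter.gru_syria
-> inet:web:post

// Record each post's hour bucket, then drop the nodes from the pipeline
$time = :time
$tally.inc($lib.time.format($time, '%H'))
| spin

// Print the per-hour counts
for ($hour, $count) in $tally {
    $lib.print('hour {h} UTC: {c} posts', h=$hour, c=$count)
}
```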
Recurring Themes Among the Posts
While the influence operation as a whole focused on the ongoing conflict in Syria, we were able to identify several more specific themes across posts and accounts. These recurring themes, which we tracked with tags, include:
Twitter accounts self-identifying as journalists or media outfits
Praise for Russian forces’ provision of humanitarian aid in Syria
Criticism of US involvement in Syria as solely due to a desire for access to Syrian oil resources
Criticism of the US as aiding, funding, and assisting terrorists across the Middle East
In at least one instance, two different accounts posted the same message in both English and Arabic. We used the Synapse-Argos Power-Up to translate the posts in Arabic and model the original and translated text as lang:translation nodes.
Several of the accounts included shortened Twitter URLs in posts, profile taglines, or elsewhere that pointed to other social media accounts and websites, including Telegram, Facebook, and Instagram. We were able to capture these redirects using the inet:urlredir
form, and in some cases, could leverage the additional information to link multiple accounts to a single persona. We were able to tie at least one such persona, which operated under the name "Pamela Spenser", to two Twitter accounts and a Facebook page.
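A hedged sketch of how one of these redirects might be modeled and then used to cluster accounts; both URLs below are invented placeholders rather than values from the dataset:

```
// Record that a shortened t.co link seen in a profile resolved to a Facebook page
[ inet:urlredir=("https://t.co/AbCdEf123", "https://www.facebook.com/example-persona-page") ]

// Later, find every shortened link that landed on the same destination
inet:urlredir:dst="https://www.facebook.com/example-persona-page" :src -> inet:url
```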
Challenges and Limitations
We encountered some challenges during this project due to limited data availability. Some of the information that we would normally have liked to capture - such as usernames, display names, and account follower and following details - was not included in the original data from Twitter. While we would normally include usernames and display names when modeling inet:web:acct nodes, Twitter hashed that data; as a result, we used the hashed values to model the :user property unless we were able to identify the username ourselves based on other details in the account or posts. In the instances in which we had both a hashed value and a username, we used the :alias property to capture both.
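As a hypothetical example (the hash, username, and tag are all placeholders), an account for which we recovered the real username would end up modeled roughly like this, with the hashed value as the :user and the recovered name on the :alias property mentioned above:

```
// Account keyed on Twitter's hashed username, with the recovered
// human-readable username captured as an alias (all values invented)
[ inet:web:acct=(twitter.com, 3f2a9c1d7b8e4f60a5d2c4e8b1f09a77)
    :alias=example_recovered_username
    +#rep.twitter.gru_syria
]
```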
Normally we also would have wanted to capture account follower and following activity to map the influence operation's network and better assess the operation's likely reach; however, the data from Twitter only included the number of followers for each suspended account. Were the accounts still active, we could have used the Synapse-Twitter Power-Up to pull in this data and model it using the inet:web:follows form, but this is not possible for suspended accounts.
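Had the accounts still been active, individual follower relationships would have been captured with nodes along these lines (both usernames below are hypothetical placeholders):

```
// A single "account A follows account B" relationship
[ inet:web:follows=((twitter.com, follower_account), (twitter.com, operation_account)) ]
```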
Similarly, although the Synapse data model supports modeling replies, retweets, and quoted tweets, we were largely unable to capture this information because we lacked access to tweets from accounts that were not suspended for involvement in the influence operation. While Twitter did include some information about replies, retweets, and quoted tweets, this was difficult to model given Twitter's method of identifying users and tweets by ID. Twitter referenced these user and tweet IDs to note whether a tweet had been made:
In response to a particular user ID
A quote of a specific tweet
A retweet of a certain user
Or a retweet of a specific tweet
As our data was limited to the suspended accounts and their posts, we were unable to identify tweet IDs associated with legitimate accounts and therefore could not represent associated retweets, replies, or quote tweets.
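For completeness, if the referenced tweet IDs could have been resolved to posts already in the Cortex, linking a reply would be a single property set; both guids below are placeholders for illustration only:

```
// Point a reply at its parent post once both exist in the Cortex
inet:web:post=d41d8cd98f00b204e9800998ecf8427e
[ :replyto=9e107d9d372bb6826bd81d3542a419d6 ]
```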
Analysis of Influence Operations Reveals Tactics, Techniques, and Likely Interests
In this blog, we walked through how we used Synapse to represent and analyze data from an influence operation leveraging Twitter. Optic’s Ingest Tool and the Synapse data model gave us the ability to quickly import, represent, and analyze large amounts of information related to an influence operation focusing on the conflict in Syria. We took advantage of the Synapse-Argos Power-Up to translate and model the translation of posts and account taglines, and also leveraged the statistics display mode to view the data in different statistical projections. Our analysis of the data gave us insight into how the operators conducted their activity, from when they began creating accounts and posting tweets to what times of the day they were most active. We were also able to identify commonalities between how the different accounts represented themselves - for example, most accounts listed a location in either Syria or Russia, and many identified themselves as journalists or other members of the media. Finally, a review of the tweets themselves helped us to identify recurring themes among the posts, which the operators almost certainly sought to communicate on a broader scale through their influence operation.
For more information on Synapse and its various use cases, join our community Slack channel, follow us on Twitter, and check out our videos on YouTube.