
A talk with OutKept: training the weakest link

OutKept is currently a resident of the Home of AI. They are a start-up that uses sophisticated phishing simulations to train company workforces to deal with malicious emails. We wanted to know more about this and about why their service is meaningful for businesses, so we interviewed the co-founders of OutKept, Simon Bauwens & Dieter Tinel.

Tell us more about the mission of OutKept.


Dieter: Simply put, OutKept prevents phishing at companies. We do this with sophisticated phishing simulation campaigns. We are unique in how we do this: we use a community of ethical phishers. Unlike competitors, we don’t use templates but have content delivered by our community. We then use the most relevant content for our customers and can quickly scale and even personalise it.

We have developed various technologies to improve this; we try to deliver that content to our customers in the best possible way so we can offer convincing simulations.


Simon: Phishing is a growing problem and is getting more convincing by the day, so it is often a new item on the agenda of many companies. Some companies have already done something about phishing, but it is usually very limited: a one-off training, or a warning email sent around the organisation after a phishing mail has already circulated. Several companies have worked with templates for sending out simulation mails. Then we get the typical response that either the employees don’t fall for them, or that the template has been copied too well, making it look like it comes from the internal server so the phishing mail can no longer be distinguished from legitimate emails.


In most organisations the phishing policy is therefore substandard, or simply not there at all. The companies we speak with usually ask how they can improve this, but also, importantly, how it can stay within budget. Because let’s be honest, many cybersecurity solutions are expensive, and phishing campaigns are no exception.


Our USP is therefore that we can offer this affordably, because we work with a close-knit community and crowdsource the content. Our ethical phishers are paid for their work, but the model is much more scalable than other solutions.

Phishing is a growing problem and is getting more convincing by the day.

Simon Bauwens & Dieter Tinel, Co-Founders OutKept

What does a phishing simulation entail?


Dieter: Customers first go through an onboarding process with us so we can explain precisely what we will do and whether specific configurations are needed to make the simulations as successful as possible. In the first stage we mainly want to offer a training experience. A few tests are performed, and then the phishing simulation campaign is activated. The responsible person on the client’s side can track everything via an online dashboard, and we offer a fully managed solution, so there is minimal burden on the client.


Simon: Regarding results, we hit the ground running immediately; usually, after six months, we see interaction with phishing emails drop to 50%, and it continues to decline in the following months. We work with a subscription formula that keeps the organisation’s learning experience going, to keep reducing the risk of phishing attacks.

Our ethical phishers also create content for specific target groups. Today, much manual work is still involved in linking content to the right people. We can already automate part of this, but a lot of manual checks remain. Therefore, we are also looking at AI, which we believe can help us send the right phishing content to the right people. The algorithms must also be able to learn what works for whom and thus always offer the most relevant content.

Dieter: The challenge also lies in the fact that organisations often use different jargon in their communication. We have to incorporate all these different kinds of information in our phishing simulations, because otherwise our simulations would quickly fall short.

The great thing about the Home of AI is that there are many opportunities to talk informally. If you have a problem or question, there are profiles here with a lot of experience willing to guide you in the right direction.

Dieter Tinel, Co-Founder OutKept


Simon: To give a concrete example, you sometimes have five different job titles for the same role within a company. Let’s say we want to send a phishing email to people who sell the company’s product or service. If we look at the job titles, you get results such as “sales”, “representative”, or “account manager”, and a whole host of variations. An AI algorithm could, for example, cluster all those job titles into a group of sales functions, so we can send the most relevant phishing emails to people in a sales-oriented role.
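To make that clustering idea more concrete, here is a minimal sketch of how job titles could be grouped by textual similarity. The titles, the character n-gram vectoriser, and the cluster count are illustrative assumptions, not OutKept’s actual pipeline.

```python
# Minimal sketch: grouping job titles into function clusters.
# Illustrative only; titles, vectoriser settings and cluster count are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

job_titles = [
    "sales representative", "account manager", "senior sales executive",
    "inside sales", "software engineer", "backend developer",
    "devops engineer", "hr business partner", "recruiter",
]

# Character n-grams are forgiving of spelling variants and partial overlaps
# between titles ("sales rep" vs "sales representative").
vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vectoriser.fit_transform(job_titles)

# Three clusters assumed here (sales, engineering, HR) purely for the example.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for cluster in range(3):
    members = [title for title, label in zip(job_titles, labels) if label == cluster]
    print(f"cluster {cluster}: {members}")
```

In practice a multilingual embedding model would likely handle the spelling and language variants better than character n-grams, but the grouping principle is the same.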


Dieter: That data is often supplied in different languages, and AI can help us deal with all these versions and translations.

Simon: In the past, phishing campaigns were used to see how vulnerable a company was, and then employees were given an hour of training to teach them how to do better. That approach is really outdated and inefficient in terms of time and resources. What we do is use phishing simulations to train the personnel directly, while they are at work. We also offer tips & video material, of course, but it is mainly the regular sending of phishing material that makes our campaigns so effective. People learn something much faster if they regularly interact with it than by watching a video or attending a one-off training session.

We offer our continuous training as a SaaS solution. It is a subscription formula through which the company can continue the phishing simulation until the organisation achieves the desired result. All the simulation metrics can be viewed at all times via a dashboard to follow the statistics and impact.

We are also fully transparent in this regard; if there is no impact, the customer can quickly terminate our collaboration. Fortunately, we have not had to experience that yet.

Our solution trains people, but we also hear from our clients that many people feel they cannot always pay attention. It is sometimes too demanding to analyse every email for phishing yourself. Training people is essential, but prevention support tools are just as necessary. We want to help people recognise misleading content faster, so that they are both trained and supported in recognising that content.


That is why, with the support of VLAIO, we are working on a tool that uses AI to support people in recognising misleading content. We will offer this alongside our simulation campaigns, which are more of a training tool.

Will that tool also encompass a threat intelligence component?


Simon: The tool will not replace the classic threat intelligence tools that senior IT profiles within an organisation use to detect large-scale phishing attacks. It will help the end user recognise phishing emails more easily: a more advisory tool that alerts users that they may be dealing with a phishing email.


So our focus is to advise people and train them. The market has little use for yet another tool that removes all harmful content for them. People learn nothing from those; on the contrary, you sometimes create the annoying situation in which emails that were not phishing disappear, which leads to frustration. We still want to give control to the end user.


AI has a lot of potential to support this by discovering anomalies or novel security threats. Still, AI can only work well if it has sufficient context, and that is usually the sticking point for the average SME, which does not have the appropriate or sufficient data, not to mention the GDPR constraints. Our phishing simulations can train the algorithm to help users make better choices. In addition, users can provide feedback to let the system know whether an email was indeed suspicious.
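As an illustration of that feedback loop, here is a minimal sketch of an incrementally trained text classifier that folds user verdicts back into the model. The example emails, labels, and model choice (scikit-learn’s HashingVectorizer with an SGDClassifier) are assumptions for the sketch, not the tool OutKept is building.

```python
# Sketch of folding user feedback ("this was/wasn't phishing") back into a
# simple incremental text classifier. Illustrative assumptions throughout.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectoriser = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = [0, 1]  # 0 = legitimate, 1 = phishing

# Initial fit on simulation data (here: two toy examples).
seed_emails = ["please verify your payroll account now",
               "agenda for monday's team meeting"]
seed_labels = [1, 0]
model.partial_fit(vectoriser.transform(seed_emails), seed_labels, classes=classes)

def score(email: str) -> float:
    """Return the model's estimated probability that an email is phishing."""
    return model.predict_proba(vectoriser.transform([email]))[0][1]

def record_feedback(email: str, is_phishing: bool) -> None:
    """Incorporate a user's verdict into the model incrementally."""
    model.partial_fit(vectoriser.transform([email]), [int(is_phishing)])

suspicious = "urgent: your mailbox will be closed, click here to keep access"
print(f"before feedback: {score(suspicious):.2f}")
record_feedback(suspicious, is_phishing=True)
print(f"after feedback:  {score(suspicious):.2f}")
```

The point of the sketch is the loop, not the model: each user verdict becomes a new labelled example, so the advisory tool keeps adapting to the organisation’s own mail traffic.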

People will learn something much faster if they regularly interact with it than by watching a video or attending a one-off training session.

Simon Bauwens, Co-Founder OutKept

How is AI evolving within the cybersecurity domain?


Dieter: AI is already widely used in the cybersecurity world, especially in detecting anomalies, because that is what it is all about: many situations are recognisable, and the challenge is to spot the cases we are not yet familiar with and detect them very quickly, before the damage is done. The threats are evolving rapidly, we need solutions quickly, and AI can play an essential role in this.

Simon: As Dieter says, the speed of analysis makes it possible to detect hazards. The second pillar is intervening in these situations. Suppose criminals are attacking your organisation or company; an AI tool can recognise the patterns faster and intervene immediately, while it often takes people much longer to realise something is wrong. With phishing, the IT team can also take immediate action if someone clicks a dangerous link that contains malware. We aim to better train both man and machine to detect suspicious activity and intervene decisively.

Why did you decide to reside in the Home of AI?

Simon: We have started the trajectory for innovative start-up support and have begun developing the AI-based tool. We are now thrilled to be able to spar about this with several experts in the Home of AI. It is not our intention to develop everything from scratch ourselves; it is interesting to gain new insights here from machine learning experts on how we can tackle certain matters more efficiently.

The intention is to also carefully test the tool with the help of various experts here in the Home of AI. The great thing about this community is that there are many opportunities to talk informally. If you have a problem or question, there are profiles here with a lot of experience willing to guide you in the right direction.


Dieter: Different people in the Home of AI are active around artificial intelligence, each in their own field. This ensures an excellent cross-pollination of knowledge, which is very valuable. When it comes to training models, not every approach is equally efficient; by talking to others about it, you hear which other directions you can take to improve.


Simon: In addition, there is a hardware challenge that we still need to find solutions for, and ML2Grow can be a vital collaboration partner there. Running an AI solution requires serious computing power, which they can provide.

Yes, we are ready for the next level!

Unlike competitors, we don’t use templates but have content delivered by our community. We then use the most relevant content for our customers and can quickly scale and even personalise it.

Simon Bauwens, Co-Founder OutKept