Hey, Wilbur, is this one for real or a spammer?

This one actually bothers me.

The text of the only post made by that account is lifted directly from Wikipedia. The post almost certainly wasn't made by a human since it includes an ad link. Are the 'bots that infect forums with spam actually learning to match content to forum? For a long time, forum software could filter 'bots by killing messages that were too short. Spam emails could be filtered the same way. Spammers started including blocks of text, sometimes nonsense and sometimes just random quotes of text found in other places, in order to beat the filters.

If someone has figured out how to make a bot that peruses a forum then posts something on-topic (out of place and out of context, yes, but at least in the ballpark of on-topic) along with the spam payload, then filtering just got a whole lot harder.

Yes, that account actually worries me, and for reasons that go far beyond its impact on this board. I'll do some research and post back later.
 
Bothers me too, along with another poster that is clearly not a bot. It's something new altogether, and it can't be good.
I've been doing research ever since my last post (I assume I'm the 'clearly not a bot' you referenced) and I'm not finding anything. Most of the highly targeted (in the vernacular, "context-aware") spambots flow from social media to email. You put your interests on Facebook, the 'bot reads about you, and it sends you spam email that might actually appeal to you. Your interests and your contact info are sourced from the same place.

I've been reading research papers on this stuff all morning. Some go back to 2007. But I haven't found anything that directly addresses what we saw with the post and account that started this.

There's another possibility: that it's not a 'bot at all. I've managed to find some discussions of manual spamming, something I would never have conceived of but that appears to be a growing problem. Apparently, serious bandwidth is finally reaching places, such as parts of Africa, that are in such terrible economic straits that it's actually profitable to hire real people to sit at terminals, manually create accounts, and make real posts from those accounts. Of course, they also toss in a spam link.

Some sysops (just gave away my age, didn't I?) are restricting posters so that their initial posts must be approved. Some are allowing posting but no links until a certain threshold number of posts has been achieved.
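
To make the second approach concrete, here's roughly what a "no links until N posts" check looks like. This is a minimal sketch, assuming a Python backend; the threshold, the names, and the link pattern are mine, not taken from any real forum package.

    import re

    # Sketch of a "no links until N posts" rule. The pattern, threshold,
    # and function name are illustrative only.
    LINK_PATTERN = re.compile(r"https?://|www\.", re.IGNORECASE)
    MIN_POSTS_FOR_LINKS = 10  # whatever threshold the admin picks

    def post_allowed(post_text, approved_post_count):
        """Allow the post unless a below-threshold account is trying to slip in a link."""
        if approved_post_count >= MIN_POSTS_FOR_LINKS:
            return True
        return LINK_PATTERN.search(post_text) is None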

They're still losing. In the first case, the real person just has to author a couple of good posts before turning the account over to a 'bot, one that now has operator-granted credibility on the forum. In the second case, you let a 'bot create the account and throw out a few filler posts until the threshold is reached, then turn the account over to a real person to start spamming, or just let the real person do the spamming from the beginning.

Either way, so far the only effective strategies for fighting this stuff seem to be having a crowd of long-time trusted posters with moderation privileges, crowd-sourcing moderation (as Slashdot does), or tying forum membership to real-world credentials, such as only granting access to people who have joined your organization and paid dues. Of course, that's all on top of the normal strategies like blocking known-bad IPs and ranges.
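
For that last piece, the IP blocking, the check itself is simple enough. A minimal Python sketch, with documentation-reserved ranges standing in for a real blocklist:

    import ipaddress

    # Illustration only: these are documentation-reserved ranges (RFC 5737),
    # not an actual blocklist.
    BLOCKED_RANGES = [
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def is_blocked(address):
        """True if the poster's address falls inside any known-bad range."""
        addr = ipaddress.ip_address(address)
        return any(addr in network for network in BLOCKED_RANGES)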

At the moment, I don't really envy the admins who must deal with this crap.

Good luck.
 
I have seen a couple of possible solutions. Though I don't like them, I can see the benefit.
1. Automatic email confirmation. Before allowing active membership, a person must respond (usually by clicking a link) to an email sent to the address listed in their sign-up. (There's a rough sketch of how this works after the list.)
2. Those goofy numbers and/or letters in the boxes prior to each post. That would be a pain, and admittedly they sometimes cause me problems. They supposedly stop bots but not people.
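
Behind the scenes, item 1 boils down to something like this. A bare-bones sketch, assuming Python; the function names, the in-memory store, and the URL are placeholders, not any real forum's API.

    import secrets

    pending = {}  # one-time token -> account awaiting activation

    def start_signup(account, send_email):
        token = secrets.token_urlsafe(32)  # unguessable, single-use
        pending[token] = account
        send_email(account, "Activate your account: https://example.org/confirm?t=" + token)

    def confirm(token):
        """Called when the emailed link is clicked; returns the account to activate."""
        return pending.pop(token, None)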
 
"(I assume I'm the "clearly not a bot" you referenced) "

Not at all! There is an existing member who, on occasion, "parrots" other folks' posts by re-wording them. It seems that the member is simply reiterating with emphasis. I would like to think there is a condition at play that we can't guess at, and I am reluctant to act regarding the screwy but innocuous practice. On the other hand, I've become "gun shy" and overly suspicious of anything unusual, which is detrimental to my judgement. There ought to be a law that forum administrators can only have internet access for two years and then none for the next four.
 
"Those goofy numbers and/or letters in the boxes prior to each post. That would be a pain, and admittedly they sometimes cause me problems. They supposedly stop bots but not people."
Captchas are rapidly falling out of favor. There are any number of sweatshops out there that employ people just to solve them all day long. Even more effectively, some spambot networks are solving the captcha problem by running games online. As the bots run across captchas, they feed those images to free porn sites that have little games built in. The games say "type in what you see and get a prize". You type in the answer to the captcha, the game shows you a porn picture, and in the meantime the answer you provided is fed back to the spambot network, which uses it to complete whatever task the captcha was blocking.

There are enough ADD-addled porn addicts in the world to overcome all the captchas in the world.

A variant that works better is to ask a simple question instead of a captcha: rather than requiring someone to type what they see, you ask something like "What country borders the U.S. to the north?" or "What is the sum of 2 and 3?" It's a better test of humanity than asking someone to read and re-type obfuscated text, but it's still not perfect. As the question templates become standardized, spam networks are learning to solve them about as quickly as the captchas.
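
The check on the forum's side is almost nothing. A toy Python version, using the two questions above; everything else here is made up for illustration:

    import random

    # A toy question pool; real sites rotate much larger, less guessable sets.
    QUESTIONS = [
        ("What country borders the U.S. to the north?", "canada"),
        ("What is the sum of 2 and 3?", "5"),
    ]

    def pick_question():
        """Return a (question, expected_answer) pair to show at sign-up or post time."""
        return random.choice(QUESTIONS)

    def check_answer(expected, submitted):
        return submitted.strip().lower() == expected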

It's a never-ending arms race.
 