
Rick Moen wrote:
> Quoting James Harper (james.harper@bendigoit.com.au):
>> Yes, that was my reason for blocking too... too much noise in the logs makes log analysis difficult. For the same reason I've changed ports in a lot of cases too - now when I see traffic it's probably worth following up.
> Personally, I regard that as solving the wrong problem. Instead, I tweak logfile analysis to ignore basically meaningless so-called 'attacks' (net.randoms' doorknob-twisting of the sshd, etc.).
> (Noise in your logs? Of course there's noise in your logs. It's the Internet, after all. If the 'flooding' bothers you, don't look at it.)
I use logcheck to ignore routine logs. However, it is still annoying, once my attention is drawn to the raw logs, to have to grep out all the SSH noise each time. Further, if I am too heavy-handed or inattentive with my grep -v, I might elide something relevant to whatever made me look at the logs.
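For concreteness, the kind of filter I mean is something like this (the pattern is illustrative only, which is exactly the problem):

    grep -vE 'sshd\[[0-9]+\]: (Failed password|Invalid user|Connection closed)' /var/log/auth.log

Every alternation added to that pattern is another chance to silently discard the one line I actually needed to see.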
> Automated iptables blacklists are mostly just a clever way to DoS yourself,
I have SSH on a high port, reachable only from blacklisted IPs, that allows logins with a particular key, with a forced command that whitelists the source IP for an hour. That key is distributed to staff, so if they manage to lock out their IP, they can unlock it again (assuming they realize what's happened).

A bunch of staff, all related, were connecting from home (NATted to a single public IP), and were all using autossh. They were routinely blacklisting themselves when their ADSL cut out and came back and all their autossh sessions tried to reconnect at once. Their IP is permanently whitelisted now (grumble), but whitelisting was mostly working for them before that.
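Roughly, the moving parts look like this; the set name, the script path, and the use of ipset for the timed whitelist are illustrative rather than my exact setup:

    # One-time setup: a whitelist with per-entry timeouts, consulted
    # before the blacklist rules.
    ipset create ssh-whitelist hash:ip timeout 3600
    iptables -I INPUT -p tcp --dport 22 -m set --match-set ssh-whitelist src -j ACCEPT

    # authorized_keys entry for the staff key (key material elided):
    command="/usr/local/sbin/whitelist-me",no-pty,no-port-forwarding ssh-ed25519 AAAA... staff-unlock

    # /usr/local/sbin/whitelist-me, the forced command:
    #!/bin/sh
    # SSH_CONNECTION is "client-ip client-port server-ip server-port";
    # take the first field as the source address.
    ip="${SSH_CONNECTION%% *}"
    # Whitelist it for an hour; -exist makes re-adding refresh the timeout.
    ipset add -exist ssh-whitelist "$ip" timeout 3600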
> in my experience, and add to system complexity and impair the goal of deterministic behaviour without any benefit worth having. Your Mileage May Differ[tm].
I cannot argue with that: it does make the system more complex.