There's already a question about "How to write good unit tests".
Based on the answers provided there, the key properties of a good unit test are that it is fast, short, repeatable, independent, tests only one thing, is readable, and has a good name.
Keeping those properties in mind, how would one go about automating checks to ensure that only "good" unit tests are merged back into the main codebase?
I am absolutely of the opinion that automating these checks is the way to go, if it can reasonably be achieved. There are so many things a reviewer needs to watch out for when accepting a merge request - clean code, design, architecture, clean tests, etc. - so reducing that burden by automating checks that used to be manual is always welcome.
Let's sort your properties by ease of automated checking:
Fast - Most IDEs and test runners already tell you this
Short - Your line count tells you this (so long as you don't abuse white space)
Repeatable - Rerunning already tells you this
Independent - This could be checked by listing what files the test requires, and what they require ...
Tests one thing (SRP) - Count your asserts. Is there more than one? (See the sketch after this list.)
Readable - Simple: write an AI that writes code and invite it to the code review. Ask it what it thinks.
Has a good name - Hope the AI is smarter than humans because even we suck at this.
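Several of the easy ones can be combined into a single CI gate. Below is a minimal sketch, not a definitive implementation: it assumes pytest-style tests that use bare `assert` statements (unittest's `self.assertEqual(...)` calls would need separate handling), Python 3.8+ so that `end_lineno` is available on AST nodes, and made-up thresholds you'd tune to taste.

```python
import ast
import sys

# Made-up thresholds - tune to your team's taste.
MAX_ASSERTS = 1   # "tests one thing": one logical assertion per test
MAX_LINES = 15    # "short": flag anything longer

def check_test_file(path):
    """Return a list of complaints about the test functions in one file."""
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            # Counts only bare `assert` statements (pytest style).
            asserts = sum(isinstance(n, ast.Assert) for n in ast.walk(node))
            length = node.end_lineno - node.lineno + 1  # Python 3.8+
            if asserts > MAX_ASSERTS:
                problems.append(f"{path}:{node.lineno} {node.name} has {asserts} asserts")
            if length > MAX_LINES:
                problems.append(f"{path}:{node.lineno} {node.name} is {length} lines long")
    return problems

if __name__ == "__main__":
    issues = [p for path in sys.argv[1:] for p in check_test_file(path)]
    print("\n".join(issues))
    sys.exit(1 if issues else 0)
```

Run it over the changed test files in a pre-merge hook or CI step; a non-zero exit blocks the merge.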
Your list of characteristics of unit tests is missing some important features, in my opinion:
The main point of a good test is that it fails when something is wrong, does not fail when nothing is wrong, and lets you find out what was wrong. So look for exactly that:
As already mentioned, a good test fails when the system under test experiences "breaking" changes.
To automatically evaluate new unit tests against the above criteria, you could try to implement mutation testing: deliberately introduce small changes ("mutants") into the code under test and check that, for each mutant, at least one test fails.
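Here is a minimal sketch of that idea, under some loud assumptions: the suite runs with plain `pytest` from the project root, the source file passed in is the module the tests import, Python is 3.9+ so `ast.unparse` exists, and the single mutation operator (swapping `+` and `-`) is purely illustrative.

```python
import ast
import subprocess
import sys

class FlipAddSub(ast.NodeTransformer):
    """Illustrative mutation operator: turn the n-th +/- in a file into its opposite."""
    def __init__(self, target):
        self.seen = -1
        self.target = target
        self.mutated = False

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, (ast.Add, ast.Sub)):
            self.seen += 1
            if self.seen == self.target:
                node.op = ast.Sub() if isinstance(node.op, ast.Add) else ast.Add()
                self.mutated = True
        return node

def tests_pass():
    # Exit code 0 from pytest means every test passed, i.e. the mutant survived.
    return subprocess.run([sys.executable, "-m", "pytest", "-q"]).returncode == 0

def mutation_score(source_path):
    """Apply each mutation in turn and count the mutants the test suite misses."""
    original = open(source_path).read()
    survivors, target = 0, 0
    try:
        while True:
            mutator = FlipAddSub(target)
            tree = mutator.visit(ast.parse(original))
            if not mutator.mutated:
                break  # ran out of mutation sites
            with open(source_path, "w") as f:
                f.write(ast.unparse(tree))  # Python 3.9+
            if tests_pass():
                survivors += 1
                print(f"mutant #{target} survived - no test noticed the change")
            target += 1
    finally:
        with open(source_path, "w") as f:
            f.write(original)  # always restore the pristine source
    return survivors

if __name__ == "__main__":
    sys.exit(1 if mutation_score(sys.argv[1]) else 0)
```

In practice you would reach for an existing tool (PIT for Java, mutmut or Cosmic Ray for Python) rather than rolling your own; mature tools ship curated sets of mutation operators, which is exactly where the problem below comes in.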
You'll probably get lots of false negatives at first. It will probably improve with careful selection of mutation operations that actually lead to failures. As an example, switching adjacent declarations of local variables is rather unlikely to yield a significant error.