Sometimes we categorize tests into groups like "pure unit test", "focused integration test", "end-to-end test", etc. That's a fine approach, and useful in a lot of cases.
For example, I find that pure unit tests are extremely valuable in giving me feedback about my code design, especially coupling and duplication. I don't even have to run the tests to get that value! Other types of tests have their value, but they don't give me that feedback.
Another categorization I sometimes find useful is based on the FIRST Properties of Unit Tests. You should read that link for the full story, but I'll summarize here (with a small example after the list):
Fast
Isolated (tests have a single reason to fail. One aspect of behavior = one test)
Repeatable (same result every time)
Self-verifying (tests report an unambiguous pass/fail)
Timely (each test is created just before it is needed)
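To make those properties concrete, here's a minimal sketch in Python with pytest. The apply_discount function and its rules are hypothetical, invented only to show the shape of a test that satisfies FIRST; it isn't code from any real project.

```python
# test_discount.py -- a hypothetical example of tests with the FIRST properties.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical code under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Fast: pure in-memory computation, no I/O, so each test runs in well under 1ms.
# Isolated: each test checks one aspect of behavior, so it has one reason to fail.
# Repeatable: no clock, network, filesystem, or shared state is involved.
# Self-verifying: the assert reports an unambiguous pass/fail.

def test_discount_reduces_price():
    assert apply_discount(100.00, 25) == 75.00

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(80.00, 0) == 80.00

def test_discount_over_100_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(50.00, 150)
```

(Timely isn't visible in the code itself; it's about when you write the test, namely just before the behavior it covers.)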
It's common for programmers to have one set of tests that they run with every edit-build-test cycle on their dev machine. They might have another set they run to validate each checkin before it merges into source control. Another that runs nightly or weekly. Another that runs before each release.
I've noticed that the decision about which tests fit in each of these buckets is less about "unit" vs. "integration" and more about "FIRS" (without the T). That is, if a test is fast and its results are reliable and useful, programmers will tend to run it more often. If a test is slow, or its results require investigation, they will tend to run it less often.
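One way to implement those buckets, once you know which tests are fast and reliable, is to tag tests and select them by tag at each stage. This is only a sketch using pytest markers; the marker names (fast, nightly) and the commands shown are my own illustrative convention, not something prescribed here.

```python
# A sketch of speed-based test buckets using pytest markers.
# Register the markers in pytest.ini (or pyproject.toml) so pytest doesn't warn:
#
#   [pytest]
#   markers =
#       fast: deterministic sub-millisecond tests, run on every edit-build-test cycle
#       nightly: slow or environment-heavy tests, run on a schedule
import pytest

@pytest.mark.fast
def test_parse_header():
    # Milliseconds, deterministic: safe to run on every edit.
    assert "Content-Type".lower() == "content-type"

@pytest.mark.nightly
def test_full_data_migration():
    # Placeholder for a slow, environment-heavy test that runs nightly.
    ...

# Selecting a bucket on the command line:
#   pytest -m fast            # every edit-build-test cycle
#   pytest -m "not nightly"   # pre-merge validation
#   pytest -m nightly         # nightly/weekly run
```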
Ideally, I'd like to see 99.9% of tests run in 1ms or less, be perfectly repeatable, report a clear pass/fail, and make it obvious on each failure which aspect of which desired behavior is not right. You should strive for that. But today, given the tests you have, you may find value in bucketing your tests as I've described.