Regarding the connection between intelligence and consciousness and whether they *have* to coexist, I think problems of both logic and semantics muddle the question quite a bit. And I know, semantics are boring for many people, but here it's impossible to ignore how they influence thinking about consciousness in relation to intelligence.
For starters, there is no clear, general meaning or definition of "intelligence". To quote Wikipedia, "Intelligence has been defined in many different ways including one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving." This gives a pretty good "feel" for what intelligence is, but does not answer where the distinction between intelligent and non-intelligent analysis, algorithms or data manipulation lies.
We can approach intelligence either as pure problem solving, which is less ambiguous but both counterintuitive and controversial, or as the capacity for abstract thought and reasoning similar to that of a human. The two are related but distinct, and each has its own problems.
If intelligence is purely problem solving, we have to consider *any* data manipulation that leads to a useful result a form of intelligence. A chess program is intelligent in this sense, but so is a simple Python script to automate a workplace task. Heck, an automatically operated door with a light sensor displays a rudimentary form of intelligence if you use this definition.
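To make concrete just how low the bar of the pure problem-solving definition sits, here is a hypothetical sketch of the light-sensor door mentioned above (the function name and threshold are illustrative, not from any real device):

```python
def door_controller(light_level: float, threshold: float = 0.5) -> str:
    """Return the door action for a given (hypothetical) sensor reading."""
    # A shadow falling on the sensor lowers the reading below the
    # threshold, which the controller treats as "someone is at the door".
    if light_level < threshold:
        return "open"
    return "close"

# The "intelligence" here is one comparison, yet it maps sensory input
# to a useful action, which is all the bare definition demands.
print(door_controller(0.2))  # → open
print(door_controller(0.9))  # → close
```

Under the problem-solving definition, this single `if` statement qualifies, which is exactly the reductio being made.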
You can add caveats to this understanding of intelligence, such as the use of memory and the ability to solve problems beyond the scope that the system was originally designed for, but those do not remove the issue completely. A GPU that is used to mine for Bitcoin would be "intelligent" in the sense that it operates beyond the original parameters the device was made for. Meanwhile, a human using their wits to escape a predator would not be displaying "intelligence", because evading predators is exactly what their cognition was "designed" for.
Anyhow, we don't think of simple machines, or simple programs, as "intelligent". We usually tend to think of intelligence as the ability to analyse a situation and solve it, similarly to the way humans do. We consider something to be intelligent if it is like us, but the problem is that this is dangerously close to the "no true Scotsman" fallacy.
When looking for signs of intelligence in, say, animals, we consider certain signs of intelligence to be more telling than others. In general, these are:
1) Communication, especially verbal communication;
2) The ability to manipulate abstract symbols and to associate signs with their meaning (language, writing, art, etc.);
3) Problem-solving when a clear goal is presented;
4) Manual dexterity;
5) Empathy.
These are all obviously signs of intelligence in the broad, dry, "problem solving" sense, but we associate certain behaviours with intelligence more than others. An excellent athlete is obviously very good at rapid-fire analysis of information and reaction to it, but we do not conventionally call athletic prowess "intelligence" to the extent we consider being good at games, academic achievement, or good social skills to be signs of intelligence.
The problem is, this indicates quite clearly that our perception of intelligence is not based on a verifiable, objective principle. There is no mathematical metric for that. For example, a computer that is amazing at solving a particular task is still "just a machine" even if it does the task 1000 times better than a human. What is perceived as "intelligent" is basically any behaviour that is either human-like, or highly valued in a particular culture.
The problem is, again, that this is both a "no true Scotsman" fallacy and a case of begging the question (in the original sense of assuming the conclusion, not in the everyday use sense of the phrase). We have some vague notions of what intelligence is, but it's not "real" intelligence if it strays too far from the human template. The ability to understand mathematical concepts and apply them? Considered intelligent. The ability of a simple program to perform very rapid calculations? Not intelligent.
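The contrast in that last pair is worth making concrete. A one-liner like the (purely illustrative) example below evaluates, instantly, a sum that we would happily take as a sign of intelligence if a person worked it out, yet nobody calls the interpreter intelligent for running it:

```python
# Sum of the first 1000 squares: a feat of mental arithmetic in a human,
# a trivial loop for a machine. Speed and correctness alone clearly do
# not track our intuition of "intelligence".
total = sum(n * n for n in range(1, 1001))
print(total)  # → 333833500
```

The same output from a human prodigy and from this script gets classified completely differently, which is the double standard being described.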
The thing is, the question "are consciousness and general intelligence connected" is pointless if we use that intuitive, human-centric understanding of intelligence. Our general idea of intelligence IS centered around consciousness and, worse, around what humans perceive as meaningful. With such an assumption, any intelligence (in the general sense of being able to solve problems based on data and memory) is judged not on pure efficiency and capability, but on how closely it mimics a human thought process. This is circular reasoning, and *of course* it leads to the conclusion that any kind of intelligence without consciousness is "not really" intelligence.
In other words, a hypothetical species that could solve insanely difficult problems, but could not meaningfully communicate (due to its evolutionary history or whatever), would be considered unintelligent. On the flipside, a hypothetical species that was extremely good at coordinating its actions and at understanding and predicting the behaviour of other beings, but incapable of understanding complex abstract concepts, would also be deemed not very intelligent. And conversely, a species that was, say, very octopus-like, with extreme abilities in spatial reasoning and object manipulation, might very well think of humans as unintelligent.
Without a good explanation of why a good chess program is not intelligent, the question "is intelligence without consciousness possible" is pointless, and kinda has the answer built in.