@NightAngel79,
I hope I don't cover anything you already know about the subject, but to summarize: duplicate content is something of an old wives' tale among web developers. It was, and still is, a panic button for a lot of site owners because of the huge piles of misinformation about it all over the web, but the reality is that the filter is hard to trigger and rarely fires outside of obvious attempts at search engine spamming. The filter exists to make sure you never get a list of search results that is nothing but the same page over and over again, which is why it checks everything right down to the document structure. If it didn't, sites like news aggregators would all but cease to exist.
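Just to make the "filter, not penalty" idea concrete, here's a rough sketch of how a result list can be de-duplicated. To be clear, this is a textbook technique (word shingles plus Jaccard similarity) that I'm using purely as an illustration, not Google's actual method; the function names and the 0.8 threshold are invented for the example.

```python
# Toy near-duplicate filter for a list of search results.
# NOT how Google does it -- real engines use far more signals
# (document structure, links, etc.); this just shows the concept
# of dropping results that are nearly identical to one already shown.

def shingles(text: str, n: int = 3) -> set:
    """Break normalized text into overlapping n-word 'shingles'."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets, from 0.0 to 1.0."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def filter_duplicates(results: list[str], threshold: float = 0.8) -> list[str]:
    """Keep each result unless it's nearly identical to one already kept."""
    kept, kept_shingles = [], []
    for doc in results:
        s = shingles(doc)
        if all(jaccard(s, prev) < threshold for prev in kept_shingles):
            kept.append(doc)
            kept_shingles.append(s)
    return kept

# The two near-identical pages collapse into one; the distinct page survives.
pages = [
    "Android 2.2 Froyo update rolling out to the Nexus One this week",
    "Android 2.2 Froyo update rolling out to the Nexus One this week!",
    "How to root the HTC Evo 4G: a step by step forum guide",
]
print(filter_duplicates(pages))
```

Notice that nothing in there ranks anything; the duplicate copy is simply hidden from the result list, which is the whole point.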
There's no actual penalty; it's just a filter. Every page is still ranked by the same mystical algorithm using keywords, markup, backlinks and a variety of other criteria. You actually have to work really hard to get filtered, and in the case of sites like phandroid.com/androidforums.com it would do nothing to hurt the main site's search ranking. What's more likely is that in a generic search you'd find phandroid.com 9 times out of 10, while in a more qualified search that includes a particular subject being discussed in a thread, you'd find androidforums.com. It still wouldn't be unusual to see both, depending on the keywords you searched for.
Matt Cutts has tried to dispel the myth a number of times, but even he gets buried under the sheer number of people posting blogs and tutorials to the contrary.