There are many terms that carry specific meanings in SEO. Be sure you understand them properly so you do not make unnecessary mistakes with your traffic, rankings and conversions.
1. Robots.txt is often thought to block content from appearing in search engines. In fact, it only stops a page or section of a site from being crawled. Blocking a page in robots.txt keeps Google from reading the page's content, but the URL itself can still end up in the index. If you want a URL kept out of the index, add a noindex tag to the head of the page instead (see the first sketch at the end of this article).
2. Google DNS is another source of confusion, as many people do not understand how Google can find development sites even without a link to them. Google operates a DNS (Domain Name System) service and a domain registrar, so it can learn about websites as soon as they are created. If you do not want a site indexed by Google, put it behind login controls or use noindex/nofollow tags (see the second sketch at the end of this article).
3. Many site owners worry about Panda and Penguin penalties, which do not actually exist. Manual actions are the only true Google penalties. Panda and Penguin were algorithm updates, and any drop your site experienced because of them was the result of an algorithm shift, not a penalty.
4. The duplicate content filter kicks in whenever Google finds two copies of the same thing, whether that is the exact same words on two pages or nearly identical content on separate pages. This is not a penalty, but a filter to prevent poor, repetitive search results for Google users.
5. PageRank as a public-facing value looks different from the value Google uses internally. PageRank is based on the links pointing to your site and the quality of those links, which indicate your website's strength; it is not an absolute 1-to-10 score. It is also being phased out over time, as Google refocuses site owners on different, more valuable metrics.
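
To make the robots.txt distinction in item 1 concrete, here is a minimal sketch. The /drafts/ path is only an example, and remember that a page must stay crawlable for Google to see its noindex tag, so do not block it in robots.txt at the same time.

```
# robots.txt — placed at the site root; blocks crawling of these URLs,
# but a blocked URL can still appear in the index if other pages link to it
User-agent: *
Disallow: /drafts/
```

```
<!-- In the <head> of a page you want dropped from the index entirely;
     the page must NOT be blocked in robots.txt, or Google never sees this tag -->
<meta name="robots" content="noindex">
```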
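
For item 2, here is one way to keep a development site out of Google, sketched for an nginx-served staging host; the server name, document root and password-file path are hypothetical and would need to match your own setup.

```
# nginx staging-site config: require a login and send a noindex header
server {
    listen 80;
    server_name staging.example.com;   # hypothetical staging hostname

    # Login controls: crawlers (and everyone else) are stopped at the door
    auth_basic           "Staging";
    auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd

    # Belt and braces: tell any crawler that does get through not to index or follow
    add_header X-Robots-Tag "noindex, nofollow";

    root /var/www/staging;
}
```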