I understand that a link from an authoritative website typically carries greater weight than a link from a less authoritative website.
That said, I am interested to know how much effect a Source Page's context has on the link juice it passes to the Target Page.
Let's take the following sites:
www.example.com – a middle-of-the-road page about Sport. [Target Page]
www.sample.com – an authority page on Art. [Source Page]
www.domain.com – a new page about Sport. [Source Page]
The first URL has a backlink from both the 2nd and 3rd URLs. Admittedly, the 2nd URL is likely to provide greater link juice due to its authority. That said, would its link juice be diminished because it is not contextually similar? Conversely, would the 3rd URL pass more link juice to the Target Page because both pages are contextually similar?
I understand that there are other factors, such as link position, but let's just assume both source pages are identical in structure and layout, differing only in content. :-)
PageRank is not exactly what most sites tell you it is. It is based upon the trust network model, where one entity trusts another by use of certificates and prior knowledge. If another entity says "trust me" without prior knowledge and an appropriate certificate, the new entity cannot be trusted. While this is not immediately recognizable when it comes to links (after all, links between sites do not have prior knowledge or certificates), the rest of the trust model made sense in establishing the link vote model used in PageRank. However, there are problems.
For example, site A trusts site B and creates a link. Then site B trusts site C and creates a link to site C. Following the trust network model, it is assumed that, because of prior knowledge and certificates, site A therefore trusts site C. That makes sense for trust networks; however, it does not for links. Keep this in mind.
Now consider that rank passed is circular. For example, Bob writes a check to Chuck, who writes a check to Fred, who writes a check to Bob. Depending upon the values of the checks, who ends up with what? This means that while a vote is passed using links and authority is established, it gets confusing rather fast as to who gets what. This is one reason why earlier rank algorithms were recursive.
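The circular Bob/Chuck/Fred situation is exactly what iterative (power-iteration) PageRank resolves. Here is a minimal sketch over that three-node cycle; the 0.85 damping factor is the classic published value, and the convergence threshold is an arbitrary choice for illustration:

```python
# Minimal power-iteration PageRank over the circular example:
# Bob -> Chuck -> Fred -> Bob.
links = {"Bob": ["Chuck"], "Chuck": ["Fred"], "Fred": ["Bob"]}
damping = 0.85  # the classic damping factor from the PageRank paper
rank = {page: 1.0 / len(links) for page in links}

for _ in range(100):
    new_rank = {}
    for page in links:
        # Sum the rank flowing in from every page that links here,
        # split evenly across each source page's outgoing links.
        incoming = sum(rank[src] / len(out)
                       for src, out in links.items() if page in out)
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    # Stop once the per-iteration change is statistically insignificant.
    if max(abs(new_rank[p] - rank[p]) for p in rank) < 1e-9:
        rank = new_rank
        break
    rank = new_rank

print(rank)  # the symmetric cycle converges to 1/3 per page
```

Because the cycle is perfectly symmetric, every page ends up with an equal share; break the symmetry (say, give Fred a second outgoing link) and the shares diverge, which is exactly the "who gets what" confusion the recursion exists to settle.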
What is often forgotten is that links must have a value beyond a single vote, and sites with high authority would otherwise pass far too much value, creating a radical curve in the algorithm. There are two fixes for this.
Authority is capped. High-authority sites can only pass so much value through the algorithm, so that no single site passes enough value to blow up the system.
As well, links are evaluated for quality and value. This means that certain criteria are applied to the link before any value is passed. Each link is evaluated and assigned a value between 0 and 0.9 so that, along with the authority cap, only so much of a site's PageRank is passed. This creates a more natural curve in the algorithm so that no super-authority skews the whole system. The value of the link consists of the semantic meaning of the link, the location of the link, the source page value, and so on.

Placement within the page is very important, along with the semantic meaning of the link. For example, a link within the navigation signals that the pages linked to are of the highest value for a site. Links in the footer are of less value than links in a sidebar, which are of less value than navigational links. Links within content are also of high value and are often the best place for outgoing links. For anyone who wants a link to their site, a link within the content is best.
But wait! There's more!! Links higher up within the content are more valuable than links in the middle. But do not think that links at the end of the content have less value than anywhere else; content does not decrease in value in a linear way, and the last paragraph is often nearly as important as the first. Who wudda thunk it? As well, consider the semantic value of the content block surrounding the link and how its evaluation matches the link text and the target page content. Relevancy through the entire chain is important.
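The placement and relevancy signals above can be sketched as a toy scoring function. Every weight here is an invented illustration of the *ordering* described (navigation > content > sidebar > footer, with the end of the content nearly as strong as the top), not a known value from any search engine:

```python
# Toy link-value estimator. All weights are assumptions for illustration only.
PLACEMENT_WEIGHT = {
    "navigation": 0.9,
    "content_top": 0.85,
    "content_end": 0.8,    # the last paragraph is nearly as important as the first
    "content_middle": 0.6,
    "sidebar": 0.4,
    "footer": 0.2,
}

def link_value(placement: str, relevancy: float) -> float:
    """Combine placement and topical relevancy (0..1) into a 0..0.9 multiplier."""
    base = PLACEMENT_WEIGHT.get(placement, 0.5)
    # Cap at 0.9 so no link ever passes full value, per the answer's 0-to-0.9 range.
    return min(0.9, base * relevancy)

print(link_value("content_top", 1.0))  # 0.85
print(link_value("footer", 0.5))       # 0.1
```

The point of the sketch is only that placement and relevancy multiply together: a perfectly placed link with an off-topic surrounding content block still scores low, and vice versa.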
Also consider the number of links on a page. While on the surface this should not be an issue, many links to the same target, too many links to similar targets, or too many links with similar link text also enter the equation. The overall quality of the link itself is a consideration.
Oh, and let's not forget citations (a quote or mention) with co-occurrence (who are you mentioned with?).
All of these things are factors in assessing link value and act as a multiplier on the remaining PageRank to be passed.
This means that a page with PR6 and two links does not pass PR3 per link. It just does not work that way. Remember Bob, Chuck, and Fred? As PageRank is passed, it needs to be calculated in a recursive fashion until the changes between iterations are statistically insignificant. But that is not exactly how things work today, or at least not all the time. Because PR is based upon the trust network model, a value can be assigned to each link, and a proximity calculation, much like routing network packets through least-costly routes on the Internet, can be used to calculate PR. This requires a fairly complete index of the web to work. (This is a hint at how confident Google is in the completeness of its index, by the way.) So today a fair approximation of PR can be calculated immediately using a standard calculation taken directly from the trust network model.
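To make the "PR6 with two links does not pass PR3 per link" point concrete, here is a minimal sketch combining an authority cap with a link-quality multiplier. The cap value (5.0) and the even split across links are assumptions purely for illustration:

```python
def rank_passed(source_pr: float, link_quality: float, n_links: int,
                authority_cap: float = 5.0) -> float:
    """Rank passed through one link: the source PR is capped, split across
    outgoing links, then scaled by a 0..0.9 link-quality multiplier.
    The cap and split model are illustrative assumptions, not Google's math."""
    capped = min(source_pr, authority_cap)  # authority cap (assumed value)
    per_link = capped / n_links             # naive even split across links
    return per_link * link_quality          # quality multiplier, 0..0.9

# A PR6 page with two links does NOT pass PR3 per link:
print(rank_passed(6.0, 0.9, 2))  # 2.25, not 3.0
```

Even with the best possible link quality (0.9), the cap and the multiplier together mean the naive "PR divided by link count" figure is an upper bound that is never actually reached.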
Looking at your example, it is not a given that a higher-authority page will pass more value, for a few reasons: authority caps; the placement of the link and the relevancy of its surrounding content block to both the link and the target page; the value of the link text itself; and the overall quality of the link. Clear as mud?