
World Cities Ranked by Average Annual Sunshine Hours



View the high resolution of this infographic by clicking here

While we all see the same sky, we see it a bit differently depending on where we stand.

For those in the planet’s most extreme regions, the sun doesn’t follow the same pattern of seasons as it does in more temperate regions.

Today’s visualization comes from Sleepopolis and summarizes the top cities on each continent that receive the most and least annual sunshine hours.

Ranked: Cities with the Least and Most Sunshine Hours

While the graphic groups the top five cities from each continent, the tables below highlight the top 10 cities from around the world that boast the highest and lowest annual sunshine hours.

Top 10 Cities with the Most Annual Sunshine

City         | Country       | Climate                         | Annual Sunshine Hours
Yuma         | United States | Arid                            | 4,015.3
Marsa Alam   | Egypt         | Arid                            | 3,958.0
Dakhla Oasis | Egypt         | Arid                            | 3,943.4
Calama       | Chile         | Arid, Marine West Coast, Tundra | 3,926.2
Phoenix      | United States | Arid                            | 3,871.6
Keetmanshoop | Namibia       | Arid                            | 3,870.0
Las Vegas    | United States | Arid                            | 3,825.3
Tucson       | United States | Arid                            | 3,806.0
Kharga       | Egypt         | Arid                            | 3,790.8
El Paso      | United States | Semiarid                        | 3,762.5

The sunniest city on Earth is Yuma, Arizona, in the U.S. As the driest city in the country, Yuma receives less than 200 millimeters (8 inches) of rainfall and endures roughly 100 days of 40°C (104°F) weather every year. Yuma lies between the Gila and Colorado rivers, in a lush region that produces almost 90% of the leafy vegetables grown in the U.S.

Arizona boasts three of the top 10 sunniest cities in the world, including Phoenix in the fifth spot. The fifth most populous city in the U.S., Phoenix is known as “the Valley of the Sun”.

Perhaps unsurprisingly, Egypt also has three cities in the top 10 list, with Marsa Alam, Dakhla Oasis, and Kharga claiming the 2nd, 3rd, and 9th sunniest spots, respectively. Dakhla Oasis, or “inner oasis”, receives practically zero precipitation each year.

Top 10 Cities with the Least Annual Sunshine

City         | Country           | Climate                                 | Annual Sunshine Hours
Totoró       | Colombia          | Marine West Coast                       | 637.0
Tórshavn     | Faroe Islands     | Marine West Coast                       | 840.0
Chongqing    | China             | Humid Subtropical                       | 954.8
Dikson       | Russia            | Tundra                                  | 1,164.3
Malabo       | Equatorial Guinea | Tropical Wet and Dry                    | 1,176.7
Buenaventura | Colombia          | Tropical Wet and Dry, Humid Subtropical | 1,178.0
Lima         | Peru              | Arid                                    | 1,230.0
Ushuaia      | Argentina         | Tundra                                  | 1,281.2
Reykjavik    | Iceland           | Tundra, Marine West Coast               | 1,326.0
Bogotá       | Colombia          | Marine West Coast                       | 1,328.0

Although often perceived as a sunny country, Colombia borders both the Caribbean Sea and the Pacific Ocean, exposing it to a wide variety of weather patterns and levels of precipitation. Colombia alone is home to three of the top 10 cities with the fewest hours of annual sunshine.

Tórshavn, which has the second-fewest sunshine hours on the list, lies in the Faroe Islands between the Scottish coast and Iceland and receives roughly 37 days of sunshine every year; average temperatures on the islands barely climb above 5°C (41°F).

Our sun doesn’t shine at the same brightness all the time. NASA has observed that the sun goes through “solar cycles” lasting roughly 11 years, brightening and dimming at fairly regular intervals and affecting how much sunlight reaches us at any given time.

Sunshine Near the Poles

Humans typically need exposure to the sun to maintain healthy sleep habits, as our brain has been hardwired to follow natural waking and sleeping rhythms.

However, several cities experience no sun at all for several months at a time in what’s known as the “Polar Night”.

  • Tromsø, Norway: the winter darkness, which can last for over a month, is enjoyed rather than endured
  • Svalbard, Norway: even indirect sunlight is absent, with no change in sunlight to help indicate a 24-hour day
  • Dikson, Russia: receives no sunlight whatsoever in December

Wherever you live, people have been watching and tracking the movements of the sun with rapt attention for millennia, even during the stretches when it cannot be seen.


The post World Cities Ranked by Average Annual Sunshine Hours appeared first on Visual Capitalist.

Merger Arb: A Dis-economy of Scale

A new paper in the Journal of Economics and Business presents data on merger arbitrage and examines which factors, especially sector size and individual fund size, do or do not affect the alpha available from the strategy. For a given time period, the total dollar amount…

Visualizing the Massive Cost of Cybercrime



View the high resolution of this infographic by clicking here.

What do Equifax, Yahoo, and the U.S. military have in common? They’ve all fallen victim to a cyberattack at some point in the last decade—and they’re just the tip of the iceberg.

Today’s infographic from Raconteur delves into the average damage caused by cyberattacks at the organizational level, sorted by type of attack, industry, and country.

Rising Cybercrime Costs Across the Board

The infographic focuses on data from the latest Accenture “Cost of Cybercrime” study, which details how cyber threats are evolving in a fast-paced digital landscape.

Overall, the average annual cost to organizations has been ballooning for all types of cyberattacks. For example, a single malware attack in 2018 cost more than $2.6 million, while ransomware costs rose the most between 2017 and 2018, from $533,000 to $646,000 (a 21% increase).

Information loss and business disruption resulting from attacks have been found to be the major cost drivers, regardless of the type of attack:

  • Malware
    Major consequence: Information Loss
    Average cost: $1.4M (54% of total losses)
  • Web-based attacks
    Major consequence: Information Loss
    Average cost: $1.4M (61% of total losses)
  • Denial-of-Service (DoS)
    Major consequence: Business Disruption
    Average cost: $1.1M (65% of total losses)
  • Malicious insiders
    Major consequences: Business Disruption and Information Loss
    Average cost: $1.2M ($0.6M each, 75% of total losses)

In 2018, information loss and business disruption combined for over 75% of total business losses from cybercrime.

Cybercrime Casts a Wide Net

No industry is untouched by the growing cost of cybercrime—the report notes that organizations have seen security breaches grow by 67% in the past five years alone. Banking is the most affected, with annual costs crossing $18 million in 2018. This probably comes as no surprise, considering that financial motives are consistently a major incentive for hackers.

Here is the average cost of cyberattacks (per organization) across 16 different industries:

Industry        | 2017 Cost | 2018 Cost | % Change
Banking         | $16.6M    | $18.4M    | +11%
Utilities       | $15.1M    | $17.8M    | +18%
Software        | $14.5M    | $16.0M    | +11%
Automotive      | $10.7M    | $15.8M    | +47%
Insurance       | $12.9M    | $15.8M    | +22%
High tech       | $12.9M    | $14.7M    | +14%
Capital markets | $10.6M    | $13.9M    | +32%
Energy          | $13.2M    | $13.8M    | +4%
U.S. Federal    | $10.4M    | $13.7M    | +32%
Consumer goods  | $8.1M     | $11.9M    | +47%
Health          | $12.9M    | $11.8M    | -8%
Retail          | $9.0M     | $11.4M    | +26%
Life sciences   | $5.9M     | $10.9M    | +86%
Media           | $7.6M     | $9.2M     | +22%
Travel          | $4.6M     | $8.2M     | +77%
Public sector   | $6.6M     | $7.9M     | +20%

Interestingly, the impact on life sciences companies rose the most in a year (up by 86% to $10.9 million per organization), followed by the travel industry (up 77% to $8.2 million per organization). This is likely due to an increase in sensitive and valuable data being shared online, such as clinical trial details or credit card information.
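
The percent-change column is simply the year-over-year growth of the average cost: the 2018 cost minus the 2017 cost, divided by the 2017 cost. As a quick sanity check, here is a minimal Python sketch using three rows from the table above; the industry names and dollar figures come from the table, and the one- or two-point mismatches against the published percentages presumably come from the dollar amounts themselves being rounded.

    # Recompute the year-over-year % change from the rounded $M figures
    # shown in the table above (three rows for brevity).
    costs = {                      # industry: (2017 cost in $M, 2018 cost in $M)
        "Banking":       (16.6, 18.4),
        "Life sciences": (5.9, 10.9),
        "Travel":        (4.6, 8.2),
    }

    for industry, (y2017, y2018) in costs.items():
        pct_change = (y2018 - y2017) / y2017 * 100
        print(f"{industry:<15} {pct_change:+.0f}%")

    # Output: Banking +11%, Life sciences +85%, Travel +78%
    # (the table lists +86% and +77%, computed from unrounded data)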

So What Can Companies Do?

Accenture analyzed nine cutting-edge technologies that are helping mitigate cybercrime, and calculated their net savings: the total potential savings minus the required investment in each type of technology or tool.

At almost $2.3 million in net savings, security intelligence offers a payoff that many companies already recognize. Leveraging automation, artificial intelligence, and machine learning can potentially save over $2 million as well; however, only 38% of businesses have adopted these solutions so far.

Cybercrime will remain a large-scale concern for years to come. From 2019 to 2023, an estimated $5.2 trillion in global value will be at risk from cyberattacks, creating an ongoing challenge for corporations and investors alike.


The post Visualizing the Massive Cost of Cybercrime appeared first on Visual Capitalist.


The unexpected benefit of failing at the start of your career


Who do you think would become more successful: a young scientist who received an important grant early in her career or one who just missed out on receiving that same grant?

This question might seem like “a no-brainer,” says Dashun Wang, an associate professor of management and organizations at the Kellogg School. Many of us assume that success breeds success—and that failure, especially an early career setback, is a sign of more trouble to come.

Then again, those who subscribe to the adage that “what doesn’t kill you makes you stronger” might suspect that the unsuccessful scientists actually benefited from their early setback.

“The idea that one gets stronger through failure is the kind of stiff advice that people may tell themselves in difficult times,” says Kellogg strategy professor Benjamin F. Jones. “But is there any truth to it?”

A new paper from Wang, Jones, and Kellogg postdoctoral researcher Yang Wang finds that the optimists are right: early failure can actually breed later success. Scientists who narrowly missed out on an important grant from the National Institutes of Health (NIH) ended up publishing more successful papers than those who narrowly qualified for the grant. Over the long run, “the losers ended up being better,” Wang says.

The team’s analysis suggests that the act of failing itself may have pushed the frustrated scientists to improve. What didn’t kill them made them stronger.

It’s a hopeful discovery for Wang, who jokes that he considers himself an expert in this area, due to his “extensive experience of failure.” Indeed, he has been turned down for many grant applications himself—which, it turns out, may not be such a liability after all.

Measuring the impact of early career setbacks

The team studied a type of NIH grant called the R01. Their data set included all 778,219 R01 applications submitted to the NIH between 1990 and 2005.

They settled on R01 grants because they’re NIH’s oldest and most common grant type and hugely important to early career researchers in the biomedical sciences. At some universities, receiving one of these grants—worth an average of $1.3 million—can put a young scholar on a sure path toward tenure.


NIH’s evaluation process also made these a good type of grant to study. When a researcher submits a grant application to NIH, it is reviewed by a panel of experts and assigned a numerical score. Then, depending on how much funding is available, NIH determines a cutoff point: say, applications scoring in the top 15 percent are funded, and the rest are not.

For the authors, this meant it was easy to determine which grants fell just short of receiving funding (they called these “near miss” grants) and which managed to squeak past the cutoff point (they called these “narrow wins”).
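
In code, this grouping boils down to bucketing application scores on either side of the payline. Here is a minimal sketch, assuming a hypothetical percentile scale where lower is better, a 15% cutoff as in the example above, and an arbitrary window for what counts as “narrow”; none of these values, and none of the function or variable names, come from the study itself.

    # Classify grant applications as "narrow wins" or "near misses" around a
    # funding cutoff, in the spirit of the study's design. The score scale,
    # cutoff, and window width are illustrative assumptions.
    def classify(applications, cutoff=15.0, window=5.0):
        """applications: list of (applicant_id, percentile_score) pairs,
        where a lower percentile means a better score."""
        narrow_wins, near_misses = [], []
        for applicant_id, score in applications:
            if cutoff - window <= score <= cutoff:    # funded, but just barely
                narrow_wins.append(applicant_id)
            elif cutoff < score <= cutoff + window:   # missed funding by a hair
                near_misses.append(applicant_id)
            # everything else is far from the cutoff and left out of the comparison
        return narrow_wins, near_misses

    apps = [("A", 12.0), ("B", 14.9), ("C", 15.4), ("D", 19.0), ("E", 40.0)]
    print(classify(apps))   # (['A', 'B'], ['C', 'D'])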

Then, they compared the scientists in the near-miss and narrow-win groups. The two sets of scientists were, across a variety of measures, remarkably similar—“identical twins,” Wang says, from a scientific-career perspective. They had been in the field for the same amount of time when they submitted their grant application and had published about the same number of papers, garnering roughly the same share of citations.

In other words, the only meaningful difference in their careers at that point was that the narrow winners received more than $1 million from NIH. “Now the question is, ‘Well, how big of a difference does it make ten years later?’” Wang explains.

Does failure make you stronger?

To figure out just how much of a difference these early successes or setbacks made to a scientific career, the researchers traced the careers of 623 near-miss and 561 narrow-win scientists.

Notably, it turned out that the two groups published at similar rates over the next 10 years—not what you’d expect, given that narrow winners got an early leg up from their NIH grant funding. Even more surprising, scientists in the near-miss group were actually more likely to have “hit” papers (that is, papers that cracked the top five percent of citations in a particular field and year). In the five years after they applied for NIH funding, 16.1% of papers produced by scientists in the near-miss group were hits, compared to 13.3% for the narrow-win group.

Next, the researchers wanted to pin down exactly why the near-miss group outperformed the narrow-win group in the end. This wasn’t easy to do, given all the complicated factors that influence a scientific career.

The first and most significant hypothesis the team examined was that failing to receive an NIH grant had a “screening effect”—essentially, it acted as a barrier that weeded out weaker scholars from the profession, meaning that, over time, those members of the near-miss group who stuck it out were the strongest scientists.

On the face of it, there appeared to be some merit to this idea: the team observed some attrition within the near-miss group in the aftermath of an unsuccessful grant application. Failing to receive an R01 grant came with a 12.6% chance that a scientist would disappear from the NIH grant system for the next decade, a good indication that they had stopped pursuing a research career altogether.

For a fairer comparison, the team repeated their analysis, removing the narrow-win scientists whose papers most rarely became hits. Specifically, they removed the bottom 12.6% of these narrow winners—the same portion as had left the near-miss group through attrition—so that they were left comparing what they assumed to be the highest performers of each group.

But, the team found, the attrition alone could not explain the success of the near-miss scientists—the near misses still published more hit papers than the narrow winners.
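
The adjustment described above amounts to trimming the weakest narrow winners before comparing group averages. Here is a minimal sketch with made-up hit-rate numbers; the 12.6% drop fraction is the only figure taken from the article, and everything else is placeholder data.

    # Drop the weakest 12.6% of narrow winners (by hit-paper rate) before
    # comparing the two groups. The hit-rate lists are invented placeholders.
    def trimmed_mean(hit_rates, drop_fraction=0.126):
        """Remove the lowest `drop_fraction` of performers, then average the rest."""
        ranked = sorted(hit_rates)                        # ascending: weakest first
        n_drop = int(round(len(ranked) * drop_fraction))  # how many to remove
        survivors = ranked[n_drop:]
        return sum(survivors) / len(survivors)

    narrow_win_hit_rates = [0.05, 0.08, 0.10, 0.12, 0.13, 0.15, 0.18, 0.20]
    near_miss_hit_rates  = [0.09, 0.11, 0.14, 0.15, 0.16, 0.18, 0.21, 0.22]

    print(trimmed_mean(narrow_win_hit_rates))                   # narrow winners, trimmed
    print(sum(near_miss_hit_rates) / len(near_miss_hit_rates))  # near misses, untrimmed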


Wang and Jones tested a number of other explanations: maybe, they reasoned, scientists from the near-miss group did better because they sought more influential collaborators, changed institutions, began to study a different topic, or moved into a “hot” area of research.

When they crunched the numbers, they found that there was some evidence that near-miss scientists had begun to study “hot topics,” but this, too, wasn’t enough to explain the overall performance gap.

I get knocked down, but I get up again

With all of these alternative explanations ruled out, the team was left to conclude that failure itself might be the cause of the performance gap between the near-miss and narrow-win groups.

In other words, with no clear external factor that can explain the disappointed scientists’ success, it’s reasonable to think that the experience of adversity made them better in the end—confirming the conventional wisdom that “what doesn’t kill you makes you stronger.”

Jones sees that result as highly encouraging. “The advice to persevere is common,” he says. “But the idea that you take something valuable from the loss—and are better for it—is surprising and inspiring.”

Wang says there is more he wants to know about the power of failure. Is it just limited to the sciences, or will people who face setbacks in other fields succeed too? Is there another explanation for the performance gap that wasn’t testable from the available data? (Maybe, he jokes, everyone from the near-miss group simply decided to get up half an hour earlier each day. “There’s no way for me to know if that’s what happened,” he says.)

To Wang, there is something profound in the idea that failure can, paradoxically, lead to success. It’s a reminder to him, and everyone, not to give up.

“I use this insight a lot these days, because, as I mentioned, I’m kind of a daily failure,” he says. (Editor’s note: Wang’s status as a “daily failure” cannot be confirmed by external sources.) If he struggles at something, he knows there’s a chance he will actually become better at it than “the alternate-universe Dashun” who succeeded—as long as he perseveres.

“Failure is devastating,” he says, “and it can also fuel people.”

This article was previously published in Kellogg Insight. It was republished with permission of the Kellogg School of Management.


BBRG: How Jim Simons Built the Best Hedge Fund Ever


How Jim Simons Built the Best Hedge Fund Ever
The former code breaker and math professor figured out how to do one thing very well in markets.
Bloomberg, October 28, 2019

I feel like I have been keeping a secret for a long time that I can finally share: Over the long Labor Day Weekend, I read the galley of Wall Street Journal reporter Greg Zuckerman’s new book, The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution.

I devoured the book in one sitting; it was a terrific and enjoyable read.

I had been very much looking forward to this book coming out. Jim Simons has been a man of intrigue for like, forever. When I was an incoming Applied Math + Physics major at SUNY Stony Brook, he was the outgoing Math department chairman.

For the next few decades, I followed his career. He was an enigma, both mysterious and unknown.

No longer…

Zuckerman has done a masterful job painting a fairly revealing portrait of who Simons is, and how he built Renaissance Technologies into its current state of success.

The numbers are simply unfathomable and eye-popping:

Although rumors of its performance have long circulated on Wall Street, the actual numbers are even more mind-blowing: From 1988 to 2018, Medallion returned 66.1% annually before fees. Net of fees, the gains were 39.1%. Estimated trading profits during those 30 years amounted to $104.5 billion. (About those fees: if the standard hedge fund management fee of 2%, plus 20% of the profits, sounds expensive, then what do you make of Medallion’s “5 and 44”?)
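
For a rough sense of what a “5 and 44” schedule does to a single year’s return, here is a small sketch. The fee mechanics assumed (a 5% management fee off the top, then a 44% performance fee on what remains) are a simplification supplied for illustration, not a description of Medallion’s actual terms; the fund’s fees also reportedly changed over the 30-year period, which is one reason this does not reconcile exactly with the 39.1% average net figure quoted above.

    # Illustrative single-year "5 and 44" fee calculation. The exact fee
    # mechanics here are an assumption made for this sketch.
    def net_return(gross, mgmt_fee=0.05, perf_fee=0.44):
        after_mgmt = gross - mgmt_fee        # management fee on assets
        return after_mgmt * (1 - perf_fee)   # performance fee on the remaining profit

    print(f"{net_return(0.661):.1%}")        # ~34.2% for a 66.1% gross year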

The rest of the book is similarly filled with little-known or previously unknown information. I had to work hard to find negatives to say about the book; the most significant is the title.

Overall, it is a great story, well told. I strongly recommend it.

Go read the complete review here.

~~~

I originally published this at Bloomberg, October 28, 2019. All of my Bloomberg columns can be found here and here.

The post BBRG: How Jim Simons Built the Best Hedge Fund Ever appeared first on The Big Picture.


Liquidity might be a better proxy for Size in equity markets


The Size Premium in Equity Markets: Where Is the Risk?

  • Stefano Ciliberti, Emmanuel Sérié, Guillaume Simon, Yves Lempérière, and Jean-Philippe Bouchaud
  • Journal of Portfolio Management
  • A version of this paper can be found here
  • Want to read our summaries of academic finance papers? Check out our Academic Research Insight category

The size premium is one of the factors that we have researched and dug into several times on the blog; you can find just a few examples here, here, and here. This paper, though, takes a fresh look at the size premium and adds a new perspective that we haven’t previously covered.

What are the research questions?

  1. Given various approaches to measuring the “size” of a company, is the total amount of daily traded dollars in a stock (ADV)(1) a better proxy for risk than SMB?
  2. Is CMH (“cold minus hot”) a better long-term proxy for returns when compared to SMB?

What are the Academic Insights?

  1. MAYBE. The authors argue that using market cap as a proxy for the size effect embeds biases in the L/S portfolio constructed to measure the SMB risk premium. Indeed, the lack of a clear relationship between beta and market cap (see the left side of Exhibit 3) produces SMB portfolios with a significant low-volatility exposure on the short side: very small and very large-cap stocks have betas less than 1, while mid-caps have betas larger than 1, a nontrivial result. A substitute, ADV (average daily transaction volume), is proposed, with a better-behaved relationship with beta (see the right side of Exhibit 3). ADV is conventionally used by practitioners as a measure of liquidity, although little is found in the academic literature regarding its use. For a stock, it represents the difficulty of unwinding a large position without incurring large impact costs. The idea here is that the ADV measure can be used to form a set of L/S portfolios (referred to as “cold” and “hot”) whose return would represent compensation for bearing liquidity risk.
  2. YES. ADV portfolios are less affected by the beta and low-volatility biases noted previously and are, therefore, a better substitute for the market-cap-based construction of the risk factor, SMB. Cold stocks trade at a discount due to their illiquidity, while hot stocks are subject to heavier market scrutiny and therefore exhibit less mispricing. The profitability of the CMH set of portfolios is shown in Exhibit 2, where the t-stat on the slope is significant at 5.1 over not quite 70 years. The main argument for attaching a risk-premium label to CMH portfolios is the empirical observation that significant drawdowns are observed more often for small-cap/low-ADV stocks. However, the theoretical rationale and other empirical attributes of ADV/CMH require more work. (A toy construction sketch of a CMH-style sort follows this list.)
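
As referenced in item 2 above, here is a toy sketch of a CMH-style long/short sort on average daily dollar volume. The tickers, ADV figures, quantile, and equal weighting are all illustrative assumptions, and the paper’s beta and low-vol neutralization steps are omitted.

    # Toy cold-minus-hot (CMH) sort on average daily dollar volume (ADV):
    # long the lowest-ADV ("cold") names, short the highest-ADV ("hot") names.
    def cmh_portfolio(adv_by_ticker, quantile=0.3):
        ranked = sorted(adv_by_ticker, key=adv_by_ticker.get)  # ascending ADV
        n = max(1, round(len(ranked) * quantile))              # names per leg
        cold, hot = ranked[:n], ranked[-n:]
        longs  = {t:  1.0 / n for t in cold}                   # equal-weight long leg
        shorts = {t: -1.0 / n for t in hot}                    # equal-weight short leg
        return {**longs, **shorts}

    universe = {"AAA": 2e6, "BBB": 8e6, "CCC": 5e7, "DDD": 3e8, "EEE": 2e9, "FFF": 9e9}
    print(cmh_portfolio(universe))
    # {'AAA': 0.5, 'BBB': 0.5, 'EEE': -0.5, 'FFF': -0.5}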

Why does it matter?

The ADV liquidity measure is not in widespread use in the finance literature, although two papers (Datar, Naik, and Radcliffe, 1998; Idzorek, Xiong, and Ibbotson, 2012) did tie it to long-term returns. Given at least that level of supporting research, the CMH formulation may have appeal for portfolio managers, portfolio construction processes, and empirical methods used for risk adjustment. This paper matters because it represents interesting exploratory work on the debate over the existence of a size risk premium. I think it’s promising.

The most important chart from the paper


The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained.  Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index.

Abstract

The authors find that when measured in terms of dollar turnover, and once β and low volatility (low-vol) are neutralized, the size effect is alive and well. With a long-term t-statistic of 5.1, the cold-minus-hot (CMH) anomaly is certainly not less significant than other well-known factors such as value or quality. As compared to market-cap–based SMB, the authors report that CMH portfolios are much less anti-correlated to the low-vol anomaly. In contrast with standard risk premiums, size-based portfolios are found by the authors to be virtually unskewed. In fact, they report that the extreme risk of these portfolios is dominated by the large-cap leg; small caps actually have a positive (rather than negative) skewness. The only argument that the authors find in favor of a risk premium interpretation at the individual stock level is that the extreme drawdowns are more frequent for small-cap/turnover stocks, even after accounting for volatility. According to the authors, however, this idiosyncratic risk is clearly diversifiable and should not, in theory, generate higher returns.

References

1. Average Daily Volume

Liquidity might be a better proxy for Size in equity markets was originally published at Alpha Architect. Please read the Alpha Architect disclosures at your convenience.
