Compare commits


28 Commits

Author SHA1 Message Date
ValueRaider
f19dff247e Version 0.2.45 2024-10-20 22:59:16 +01:00
ValueRaider
1de113eccd Merge pull request #2091 from ranaroussi/dev
sync dev -> main
2024-10-20 22:56:26 +01:00
ValueRaider
8daa477167 Merge pull request #2090 from ranaroussi/feature/divident-repair-improve
Feature/dividend repair improve
2024-10-20 22:55:52 +01:00
ValueRaider
3152715d45 Improve dividend repair
Dividend repair tweaks:
- handle coincident split missing from dividend
- avoid 100x price changes confusing detection of div too-big/small
- improve detecting div-date-wrong
2024-10-20 21:50:08 +01:00
ValueRaider
ee657b24d5 Merge pull request #2087 from ranaroussi/fix/debug-msg
Fix IndexError in some history() debug messages
2024-10-20 16:31:33 +01:00
ValueRaider
4672715a86 Merge pull request #2066 from ericpien/screener
Implement Screener Feature
2024-10-19 22:13:46 +01:00
Eric Pien
4a70b59db9 Simplify README for Screener
- Reduce the information provided in README with the expectation of migrating to separate documentation.
2024-10-19 09:27:32 -07:00
Ran Aroussi
12e815d977 Update README.md 2024-10-17 21:51:59 +01:00
ValueRaider
e282d1b59c Fix IndexError in some history() debug messages 2024-10-12 11:48:11 +01:00
ValueRaider
7e0946ed37 Merge pull request #2080 from ranaroussi/main
sync main -> dev
2024-10-07 21:30:03 +01:00
ValueRaider
a21fc073b8 Merge pull request #2059 from marco-carvalho/patch-1
Add Pyright type checking
2024-10-07 19:50:24 +01:00
ValueRaider
a9e3cf0780 Merge pull request #2072 from algonell/main
Fix typos
2024-10-05 20:08:10 +01:00
ValueRaider
0f63ecc2bd Merge pull request #2068 from antoniouaa/fix_tickers_single
fix keyerror with single element list passed to Tickers
2024-10-03 20:09:52 +01:00
Andrew Kreimer
10800a1070 Fix typos 2024-10-03 22:01:53 +03:00
Andrew Kreimer
db39b3fca4 Fix a typo 2024-10-03 22:00:57 +03:00
Andrew Kreimer
10d6221718 Fix typos 2024-10-03 22:00:29 +03:00
ValueRaider
17f07e08ef Merge pull request #2069 from ranaroussi/main
sync main -> dev
2024-10-02 20:12:48 +01:00
ValueRaider
f82823c624 Merge pull request #2067 from ranaroussi/fix/tests-context
Fix unit tests contextual imports
2024-10-02 20:11:30 +01:00
antoniouaa
ebc4e200c1 fix keyerror with single element list passed to Tickers 2024-10-01 23:15:54 +01:00
ValueRaider
a8df88b2d2 Fix unit tests contextual imports 2024-10-01 21:24:51 +01:00
Eric Pien
048378ea20 Implement Screener
- Add a new class Screener, Query, and EquityQuery
- Refactor YfData.get() so that YfData.post() can reuse get's implementation
- Add test for the Screener
- Add new set and map to const

Screener can be used to filter Yahoo Finance. For example, it can return the day's top gainers, or run more customized queries that a user may want to automate.
2024-09-30 18:01:11 -07:00
Marco Carvalho
f1d5c1d06f Update pyright.yml 2024-09-19 16:08:40 -03:00
Marco Carvalho
d65bf2d761 Update requirements.txt 2024-09-19 15:57:55 -03:00
Marco Carvalho
81deec0d6c Update pyrightconfig.json 2024-09-19 11:32:57 -03:00
Marco Carvalho
74f44a4e1e Update pyright.yml 2024-09-19 11:14:09 -03:00
Marco Carvalho
691904202b Create pyright.yml 2024-09-19 11:09:13 -03:00
Marco Carvalho
3fc54ab249 Create pyrightconfig.json 2024-09-19 11:08:56 -03:00
Marco Carvalho
d7c6f5f320 Update requirements.txt 2024-09-19 11:08:08 -03:00
20 changed files with 783 additions and 44 deletions

.github/workflows/pyright.yml

@@ -0,0 +1,27 @@
name: Pyright
on:
  pull_request:
    branches:
      - master
      - main
      - dev
jobs:
  pyright:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v4
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pyright
      - name: Run Pyright
        run: pyright . --level error


@@ -1,6 +1,19 @@
Change Log
===========
0.2.45
------
Features:
- Screener #2066 @ericpien
Fixes
- Tickers keyerror #2068 @antoniouaa
- IndexError in some history() debug messages #2087
- improve dividend repair #2090
Maintenance
- fix unit tests contextual imports #2067
- fix typos #2072 @algonell
- add Pyright type checking #2059 @marco-carvalho
0.2.44
------
Features:
@@ -359,7 +372,7 @@ Jumping to 0.2 for this big update. 0.1.* will continue to receive bug-fixes
- Fix timezone handling
- Fix handling of missing data
- Clean&format earnings_dates table
- Add ``.get_earnings_dates()`` to retreive earnings calendar
- Add ``.get_earnings_dates()`` to retrieve earnings calendar
- Added ``.get_earnings_history()`` to fetch earnings data
0.1.70
@@ -671,7 +684,7 @@ Jumping to 0.2 for this big update. 0.1.* will continue to receive bug-fixes
- Removed 5 second wait for every failed fetch
- Reduced TTL for Yahoo!'s cookie
- Keeps track of failed downloads and tries to re-download all failed downloads one more time before giving up
- Added progress bar (can be turned off useing ``progress=False``)
- Added progress bar (can be turned off using ``progress=False``)
0.0.7
-------


@@ -28,7 +28,6 @@ Yahoo! finance API is intended for personal use only.**
<a target="new" href="https://pypi.python.org/pypi/yfinance"><img border=0 src="https://img.shields.io/pypi/v/yfinance.svg?maxAge=60%" alt="PyPi version"></a>
<a target="new" href="https://pypi.python.org/pypi/yfinance"><img border=0 src="https://img.shields.io/pypi/status/yfinance.svg?maxAge=60" alt="PyPi status"></a>
<a target="new" href="https://pypi.python.org/pypi/yfinance"><img border=0 src="https://img.shields.io/pypi/dm/yfinance.svg?maxAge=2592000&label=installs&color=%2327B1FF" alt="PyPi downloads"></a>
<a target="new" href="https://travis-ci.com/github/ranaroussi/yfinance"><img border=0 src="https://img.shields.io/travis/ranaroussi/yfinance/main.svg?maxAge=1" alt="Travis-CI build status"></a>
<a target="new" href="https://www.codefactor.io/repository/github/ranaroussi/yfinance"><img border=0 src="https://www.codefactor.io/repository/github/ranaroussi/yfinance/badge" alt="CodeFactor"></a>
<a target="new" href="https://github.com/ranaroussi/yfinance"><img border=0 src="https://img.shields.io/github/stars/ranaroussi/yfinance.svg?style=social&label=Star&maxAge=60" alt="Star this repo"></a>
<a target="new" href="https://twitter.com/aroussi"><img border=0 src="https://img.shields.io/twitter/follow/aroussi.svg?style=social&label=Follow&maxAge=60" alt="Follow me on twitter"></a>
@@ -272,6 +271,17 @@ software_ticker = software.ticker
software_ticker.history()
```
### Market Screener
The `Screener` module allows you to screen the market based on specified queries.
#### Query Construction
To create a query, use the `EquityQuery` class to construct your filters step by step. Queries support the comparison operators `GT` (greater than), `LT` (less than), `BTWN` (between) and `EQ` (equals), plus the logical operators `AND` and `OR` for combining multiple conditions.
#### Screener
The `Screener` class is used to execute the queries and return the filtered results. You can set a custom body for the screener or use predefined configurations.
<!-- TODO: link to Github Pages for more including list of predefined bodies, supported fields, operands, and sample code -->
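Every query body in this release, from `EquityQuery.to_dict()` to the predefined bodies in `const.py`, shares one recursive `{"operator": ..., "operands": [...]}` shape. As a minimal sketch using plain dicts (no yfinance import needed; the helper name `build_query` is illustrative, not part of the API), an "equities over $3 in the US" filter could be built as:

```python
def build_query(operator, operands):
    # Mirrors the {"operator": ..., "operands": [...]} shape produced by
    # EquityQuery.to_dict() and used by the predefined screener bodies.
    return {"operator": operator, "operands": operands}

# Combine two leaf conditions with a logical "and".
q = build_query("and", [
    build_query("gt", ["eodprice", 3]),
    build_query("eq", ["region", "us"]),
])
```

Leaf operands are `[field, value]` pairs, while logical operands are themselves query dicts, so arbitrarily nested filters compose naturally.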
### Logging
`yfinance` now uses the `logging` module to handle messages; the default behaviour is to print only errors. If debugging, use `yf.enable_debug_mode()` to switch logging to debug level with custom formatting.
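Roughly, enabling debug output amounts to raising the level of the `yfinance` logger; a stdlib-only sketch (the format string here is an assumption, not yfinance's actual formatting):

```python
import logging

# Approximate what yf.enable_debug_mode() does with plain stdlib logging:
# install a handler and lower the yfinance logger's threshold to DEBUG.
logging.basicConfig(format='%(levelname)s %(name)s: %(message)s')
logging.getLogger('yfinance').setLevel(logging.DEBUG)
```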


@@ -1,5 +1,5 @@
{% set name = "yfinance" %}
{% set version = "0.2.44" %}
{% set version = "0.2.45" %}
package:
name: "{{ name|lower }}"

pyrightconfig.json

@@ -0,0 +1,15 @@
{
  "typeCheckingMode": "basic",
  "reportGeneralTypeIssues": "warning",
  "reportArgumentType": "warning",
  "reportOptionalMemberAccess": "warning",
  "reportOperatorIssue": "warning",
  "reportAttributeAccessIssue": "warning",
  "reportMissingImports": "warning",
  "reportReturnType": "warning",
  "reportAssignmentType": "warning",
  "reportOptionalSubscript": "warning",
  "reportOptionalIterable": "warning",
  "reportCallIssue": "warning",
  "reportUnhashable": "warning"
}


@@ -1,5 +1,5 @@
from .context import yfinance as yf
from .context import session_gbl
from tests.context import yfinance as yf
from tests.context import session_gbl
import unittest


@@ -1,5 +1,5 @@
from .context import yfinance as yf
from .context import session_gbl
from tests.context import yfinance as yf
from tests.context import session_gbl
import unittest

tests/test_screener.py

@@ -0,0 +1,133 @@
import unittest
from unittest.mock import patch, MagicMock

from yfinance.const import PREDEFINED_SCREENER_BODY_MAP
from yfinance.screener.screener import Screener
from yfinance.screener.screener_query import EquityQuery


class TestScreener(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.screener = Screener()
        cls.query = EquityQuery('gt', ['eodprice', 3])

    def test_set_default_body(self):
        self.screener.set_default_body(self.query)
        self.assertEqual(self.screener.body['offset'], 0)
        self.assertEqual(self.screener.body['size'], 100)
        self.assertEqual(self.screener.body['sortField'], 'ticker')
        self.assertEqual(self.screener.body['sortType'], 'desc')
        self.assertEqual(self.screener.body['quoteType'], 'equity')
        self.assertEqual(self.screener.body['query'], self.query.to_dict())
        self.assertEqual(self.screener.body['userId'], '')
        self.assertEqual(self.screener.body['userIdType'], 'guid')

    def test_set_predefined_body(self):
        k = 'most_actives'
        self.screener.set_predefined_body(k)
        self.assertEqual(self.screener.body, PREDEFINED_SCREENER_BODY_MAP[k])

    def test_set_predefined_body_invalid_key(self):
        with self.assertRaises(ValueError):
            self.screener.set_predefined_body('invalid_key')

    def test_set_body(self):
        body = {
            "offset": 0,
            "size": 100,
            "sortField": "ticker",
            "sortType": "desc",
            "quoteType": "equity",
            "query": self.query.to_dict(),
            "userId": "",
            "userIdType": "guid"
        }
        self.screener.set_body(body)
        self.assertEqual(self.screener.body, body)

    def test_set_body_missing_keys(self):
        body = {
            "offset": 0,
            "size": 100,
            "sortField": "ticker",
            "sortType": "desc",
            "quoteType": "equity"
        }
        with self.assertRaises(ValueError):
            self.screener.set_body(body)

    def test_set_body_extra_keys(self):
        body = {
            "offset": 0,
            "size": 100,
            "sortField": "ticker",
            "sortType": "desc",
            "quoteType": "equity",
            "query": self.query.to_dict(),
            "userId": "",
            "userIdType": "guid",
            "extraKey": "extraValue"
        }
        with self.assertRaises(ValueError):
            self.screener.set_body(body)

    def test_patch_body(self):
        initial_body = {
            "offset": 0,
            "size": 100,
            "sortField": "ticker",
            "sortType": "desc",
            "quoteType": "equity",
            "query": self.query.to_dict(),
            "userId": "",
            "userIdType": "guid"
        }
        self.screener.set_body(initial_body)
        patch_values = {"size": 50}
        self.screener.patch_body(patch_values)
        self.assertEqual(self.screener.body['size'], 50)
        self.assertEqual(self.screener.body['query'], self.query.to_dict())

    def test_patch_body_extra_keys(self):
        initial_body = {
            "offset": 0,
            "size": 100,
            "sortField": "ticker",
            "sortType": "desc",
            "quoteType": "equity",
            "query": self.query.to_dict(),
            "userId": "",
            "userIdType": "guid"
        }
        self.screener.set_body(initial_body)
        patch_values = {"extraKey": "extraValue"}
        with self.assertRaises(ValueError):
            self.screener.patch_body(patch_values)

    @patch('yfinance.screener.screener.YfData.post')
    def test_fetch(self, mock_post):
        mock_response = MagicMock()
        mock_response.json.return_value = {'finance': {'result': [{}]}}
        mock_post.return_value = mock_response
        self.screener.set_default_body(self.query)
        response = self.screener._fetch()
        self.assertEqual(response, {'finance': {'result': [{}]}})

    @patch('yfinance.screener.screener.YfData.post')
    def test_fetch_and_parse(self, mock_post):
        mock_response = MagicMock()
        mock_response.json.return_value = {'finance': {'result': [{'key': 'value'}]}}
        mock_post.return_value = mock_response
        self.screener.set_default_body(self.query)
        self.screener._fetch_and_parse()
        self.assertEqual(self.screener.response, {'key': 'value'})


if __name__ == '__main__':
    unittest.main()


@@ -10,8 +10,8 @@ Specific test class:
"""
import pandas as pd
from .context import yfinance as yf
from .context import session_gbl
from tests.context import yfinance as yf
from tests.context import session_gbl
from yfinance.exceptions import YFPricesMissingError, YFInvalidPeriodError, YFNotImplementedError, YFTickerMissingError, YFTzMissingError, YFDataException
@@ -848,7 +848,7 @@ class TestTickerAnalysts(unittest.TestCase):
self.assertIsInstance(data, pd.DataFrame, "data has wrong type")
self.assertTrue(data.empty, "data is not empty")
except Exception as e:
self.fail(f"Excpetion raised for attribute '{attribute}': {e}")
self.fail(f"Exception raised for attribute '{attribute}': {e}")


@@ -14,7 +14,7 @@ from unittest import TestSuite
import pandas as pd
# import numpy as np
from .context import yfinance as yf
from tests.context import yfinance as yf
import unittest
# import requests_cache


@@ -27,6 +27,8 @@ from .utils import enable_debug_mode
from .cache import set_tz_cache_location
from .domain.sector import Sector
from .domain.industry import Industry
from .screener.screener import Screener
from .screener.screener_query import EquityQuery
__version__ = version.version
__author__ = "Ran Aroussi"
@@ -34,4 +36,5 @@ __author__ = "Ran Aroussi"
import warnings
warnings.filterwarnings('default', category=DeprecationWarning, module='^yfinance')
__all__ = ['download', 'Ticker', 'Tickers', 'enable_debug_mode', 'set_tz_cache_location', 'Sector', 'Industry']
__all__ = ['download', 'Ticker', 'Tickers', 'enable_debug_mode', 'set_tz_cache_location', 'Sector', 'Industry',
'EquityQuery','Screener']


@@ -304,4 +304,258 @@ SECTOR_INDUSTY_MAPPING = {
'utilities-regulated-gas',
'utilities-independent-power-producers',
'utilities-regulated-water'}
}
EQUITY_SCREENER_EQ_MAP = {
"region": {
"za", "ve", "vn", "us", "tw", "th", "tr", "sr", "sg", "sa", "se", "ru", "ro", "qa", "pt", "pk", "pl",
"ph", "nz", "nl", "mx", "pe", "no", "my", "lv", "lt", "kw", "jp", "is", "il", "lk", "kr", "it", "in",
"ie", "hu", "id", "hk", "gb", "fi", "eg", "dk", "gr", "fr", "es", "ee", "de", "cz", "cl", "ca", "be",
"at", "cn", "br", "au", "ar", "ch"
},
"sector": {
"Basic Materials", "Industrials", "Communication Services", "Healthcare",
"Real Estate", "Technology", "Energy", "Utilities", "Financial Services",
"Consumer Defensive", "Consumer Cyclical"
},
"exchanges": {
"NMS", "NAS", "YHD", "NYQ", "NGM", "NCM", "BSE"
},
"peer_group": {
"US Fund Equity Energy",
"US CE Convertibles",
"EAA CE UK Large-Cap Equity",
"EAA CE Other",
"US Fund Financial",
"India CE Multi-Cap",
"US Fund Foreign Large Blend",
"US Fund Consumer Cyclical",
"EAA Fund Global Equity Income",
"China Fund Sector Equity Financial and Real Estate",
"US Fund Equity Precious Metals",
"EAA Fund RMB Bond - Onshore",
"China Fund QDII Greater China Equity",
"US Fund Large Growth",
"EAA Fund Germany Equity",
"EAA Fund Hong Kong Equity",
"EAA CE UK Small-Cap Equity",
"US Fund Natural Resources",
"US CE Preferred Stock",
"India Fund Sector - Financial Services",
"US Fund Diversified Emerging Mkts",
"EAA Fund South Africa & Namibia Equity",
"China Fund QDII Sector Equity",
"EAA CE Sector Equity Biotechnology",
"EAA Fund Switzerland Equity",
"US Fund Large Value",
"EAA Fund Asia ex-Japan Equity",
"US Fund Health",
"US Fund China Region",
"EAA Fund Emerging Europe ex-Russia Equity",
"EAA Fund Sector Equity Industrial Materials",
"EAA Fund Japan Large-Cap Equity",
"EAA Fund EUR Corporate Bond",
"US Fund Technology",
"EAA CE Global Large-Cap Blend Equity",
"Mexico Fund Mexico Equity",
"US Fund Trading--Leveraged Equity",
"EAA Fund Sector Equity Consumer Goods & Services",
"US Fund Large Blend",
"EAA Fund Global Flex-Cap Equity",
"EAA Fund EUR Aggressive Allocation - Global",
"EAA Fund China Equity",
"EAA Fund Global Large-Cap Growth Equity",
"US CE Options-based",
"EAA Fund Sector Equity Financial Services",
"EAA Fund Europe Large-Cap Blend Equity",
"EAA Fund China Equity - A Shares",
"EAA Fund USD Corporate Bond",
"EAA Fund Eurozone Large-Cap Equity",
"China Fund Aggressive Allocation Fund",
"EAA Fund Sector Equity Technology",
"EAA Fund Global Emerging Markets Equity",
"EAA Fund EUR Moderate Allocation - Global",
"EAA Fund Other Bond",
"EAA Fund Denmark Equity",
"EAA Fund US Large-Cap Blend Equity",
"India Fund Large-Cap",
"Paper & Forestry",
"Containers & Packaging",
"US Fund Miscellaneous Region",
"Energy Services",
"EAA Fund Other Equity",
"Homebuilders",
"Construction Materials",
"China Fund Equity Funds",
"Steel",
"Consumer Durables",
"EAA Fund Global Large-Cap Blend Equity",
"Transportation Infrastructure",
"Precious Metals",
"Building Products",
"Traders & Distributors",
"Electrical Equipment",
"Auto Components",
"Construction & Engineering",
"Aerospace & Defense",
"Refiners & Pipelines",
"Diversified Metals",
"Textiles & Apparel",
"Industrial Conglomerates",
"Household Products",
"Commercial Services",
"Food Retailers",
"Semiconductors",
"Media",
"Automobiles",
"Consumer Services",
"Technology Hardware",
"Transportation",
"Telecommunication Services",
"Oil & Gas Producers",
"Machinery",
"Retailing",
"Healthcare",
"Chemicals",
"Food Products",
"Diversified Financials",
"Real Estate",
"Insurance",
"Utilities",
"Pharmaceuticals",
"Software & Services",
"Banks"
}
}
EQUITY_SCREENER_FIELDS = {
# EQ Fields
"region",
"sector",
"peer_group",
"exchanges",
# price
"eodprice",
"intradaypricechange",
"lastclosemarketcap.lasttwelvemonths",
"percentchange",
"lastclose52weekhigh.lasttwelvemonths",
"fiftytwowkpercentchange",
"intradayprice",
"lastclose52weeklow.lasttwelvemonths",
"intradaymarketcap",
# trading
"beta",
"avgdailyvol3m",
"pctheldinsider",
"pctheldinst",
"dayvolume",
"eodvolume",
# short interest
"short_percentage_of_shares_outstanding.value",
"short_interest.value",
"short_percentage_of_float.value",
"days_to_cover_short.value",
"short_interest_percentage_change.value",
# valuation
"bookvalueshare.lasttwelvemonths",
"lastclosemarketcaptotalrevenue.lasttwelvemonths",
"lastclosetevtotalrevenue.lasttwelvemonths",
"pricebookratio.quarterly",
"peratio.lasttwelvemonths",
"lastclosepricetangiblebookvalue.lasttwelvemonths",
"lastclosepriceearnings.lasttwelvemonths",
"pegratio_5y",
# profitability
"consecutive_years_of_dividend_growth_count",
"returnonassets.lasttwelvemonths",
"returnonequity.lasttwelvemonths",
"forward_dividend_per_share",
"forward_dividend_yield",
"returnontotalcapital.lasttwelvemonths",
# leverage
"lastclosetevebit.lasttwelvemonths",
"netdebtebitda.lasttwelvemonths",
"totaldebtequity.lasttwelvemonths",
"ltdebtequity.lasttwelvemonths",
"ebitinterestexpense.lasttwelvemonths",
"ebitdainterestexpense.lasttwelvemonths",
"lastclosetevebitda.lasttwelvemonths",
"totaldebtebitda.lasttwelvemonths",
# liquidity
"quickratio.lasttwelvemonths",
"altmanzscoreusingtheaveragestockinformationforaperiod.lasttwelvemonths",
"currentratio.lasttwelvemonths",
"operatingcashflowtocurrentliabilities.lasttwelvemonths",
# income statement
"totalrevenues.lasttwelvemonths",
"netincomemargin.lasttwelvemonths",
"grossprofit.lasttwelvemonths",
"ebitda1yrgrowth.lasttwelvemonths",
"dilutedepscontinuingoperations.lasttwelvemonths",
"quarterlyrevenuegrowth.quarterly",
"epsgrowth.lasttwelvemonths",
"netincomeis.lasttwelvemonths",
"ebitda.lasttwelvemonths",
"dilutedeps1yrgrowth.lasttwelvemonths",
"totalrevenues1yrgrowth.lasttwelvemonths",
"operatingincome.lasttwelvemonths",
"netincome1yrgrowth.lasttwelvemonths",
"grossprofitmargin.lasttwelvemonths",
"ebitdamargin.lasttwelvemonths",
"ebit.lasttwelvemonths",
"basicepscontinuingoperations.lasttwelvemonths",
"netepsbasic.lasttwelvemonths",
"netepsdiluted.lasttwelvemonths",
# balance sheet
"totalassets.lasttwelvemonths",
"totalcommonsharesoutstanding.lasttwelvemonths",
"totaldebt.lasttwelvemonths",
"totalequity.lasttwelvemonths",
"totalcurrentassets.lasttwelvemonths",
"totalcashandshortterminvestments.lasttwelvemonths",
"totalcommonequity.lasttwelvemonths",
"totalcurrentliabilities.lasttwelvemonths",
"totalsharesoutstanding",
# cash flow
"forward_dividend_yield",
"leveredfreecashflow.lasttwelvemonths",
"capitalexpenditure.lasttwelvemonths",
"cashfromoperations.lasttwelvemonths",
"leveredfreecashflow1yrgrowth.lasttwelvemonths",
"unleveredfreecashflow.lasttwelvemonths",
"cashfromoperations1yrgrowth.lasttwelvemonths",
# ESG
"esg_score",
"environmental_score",
"governance_score",
"social_score",
"highest_controversy"
}
PREDEFINED_SCREENER_BODY_MAP = {
'aggressive_small_caps': {"offset":0,"size":25,"sortField":"eodvolume","sortType":"desc","quoteType":"equity","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NMS"]},{"operator":"eq","operands":["exchange","NYQ"]}]},{"operator":"or","operands":[{"operator":"LT","operands":["epsgrowth.lasttwelvemonths",15]}]}]},"userId":"","userIdType":"guid"},
'day_gainers': {"offset":0,"size":25,"sortField":"percentchange","sortType":"DESC","quoteType":"EQUITY","query":{"operator":"AND","operands":[{"operator":"gt","operands":["percentchange",3]},{"operator":"eq","operands":["region","us"]},{"operator":"or","operands":[{"operator":"BTWN","operands":["intradaymarketcap",2000000000,10000000000]},{"operator":"BTWN","operands":["intradaymarketcap",10000000000,100000000000]},{"operator":"GT","operands":["intradaymarketcap",100000000000]}]},{"operator":"gte","operands":["intradayprice",5]},{"operator":"gt","operands":["dayvolume",15000]}]},"userId":"","userIdType":"guid"},
'day_losers': {"offset":0,"size":25,"sortField":"percentchange","sortType":"ASC","quoteType":"EQUITY","query":{"operator":"AND","operands":[{"operator":"lt","operands":["percentchange",-2.5]},{"operator":"eq","operands":["region","us"]},{"operator":"or","operands":[{"operator":"BTWN","operands":["intradaymarketcap",2000000000,10000000000]},{"operator":"BTWN","operands":["intradaymarketcap",10000000000,100000000000]},{"operator":"GT","operands":["intradaymarketcap",100000000000]}]},{"operator":"gte","operands":["intradayprice",5]},{"operator":"gt","operands":["dayvolume",20000]}]},"userId":"","userIdType":"guid"},
'growth_technology_stocks': {"offset":0,"size":25,"sortField":"eodvolume","sortType":"desc","quoteType":"equity","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"BTWN","operands":["quarterlyrevenuegrowth.quarterly",50,100]},{"operator":"GT","operands":["quarterlyrevenuegrowth.quarterly",100]},{"operator":"BTWN","operands":["quarterlyrevenuegrowth.quarterly",25,50]}]},{"operator":"or","operands":[{"operator":"BTWN","operands":["epsgrowth.lasttwelvemonths",25,50]},{"operator":"BTWN","operands":["epsgrowth.lasttwelvemonths",50,100]},{"operator":"GT","operands":["epsgrowth.lasttwelvemonths",100]}]},{"operator":"eq","operands":["sector","Technology"]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NMS"]},{"operator":"eq","operands":["exchange","NYQ"]}]}]},"userId":"","userIdType":"guid"},
'most_actives': {"offset":0,"size":25,"sortField":"dayvolume","sortType":"DESC","quoteType":"EQUITY","query":{"operator":"AND","operands":[{"operator":"eq","operands":["region","us"]},{"operator":"or","operands":[{"operator":"BTWN","operands":["intradaymarketcap",10000000000,100000000000]},{"operator":"GT","operands":["intradaymarketcap",100000000000]},{"operator":"BTWN","operands":["intradaymarketcap",2000000000,10000000000]}]},{"operator":"gt","operands":["dayvolume",5000000]}]},"userId":"","userIdType":"guid"},
'most_shorted_stocks': {"size":25,"offset":0,"sortField":"short_percentage_of_shares_outstanding.value","sortType":"DESC","quoteType":"EQUITY","topOperator":"AND","query":{"operator":"AND","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["region","us"]}]},{"operator":"gt","operands":["intradayprice",1]},{"operator":"gt","operands":["avgdailyvol3m",200000]}]},"userId":"","userIdType":"guid"},
'small_cap_gainers': {"offset":0,"size":25,"sortField":"eodvolume","sortType":"desc","quoteType":"equity","query":{"operator":"and","operands":[{"operator":"lt","operands":["intradaymarketcap",2000000000]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NMS"]},{"operator":"eq","operands":["exchange","NYQ"]}]}]},"userId":"","userIdType":"guid"},
'undervalued_growth_stocks': {"offset":0,"size":25,"sortType":"DESC","sortField":"eodvolume","quoteType":"EQUITY","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"BTWN","operands":["peratio.lasttwelvemonths",0,20]}]},{"operator":"or","operands":[{"operator":"LT","operands":["pegratio_5y",1]}]},{"operator":"or","operands":[{"operator":"BTWN","operands":["epsgrowth.lasttwelvemonths",25,50]},{"operator":"BTWN","operands":["epsgrowth.lasttwelvemonths",50,100]},{"operator":"GT","operands":["epsgrowth.lasttwelvemonths",100]}]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NMS"]},{"operator":"eq","operands":["exchange","NYQ"]}]}]},"userId":"","userIdType":"guid"},
'undervalued_large_caps': {"offset":0,"size":25,"sortField":"eodvolume","sortType":"desc","quoteType":"equity","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"BTWN","operands":["peratio.lasttwelvemonths",0,20]}]},{"operator":"lt","operands":["pegratio_5y",1]},{"operator":"btwn","operands":["intradaymarketcap",10000000000,100000000000]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NMS"]},{"operator":"eq","operands":["exchange","NYQ"]}]}]},"userId":"","userIdType":"guid"},
'conservative_foreign_funds': {"offset":0,"size":25,"sortType":"DESC","sortField":"fundnetassets","quoteType":"MUTUALFUND","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["categoryname","Foreign Large Value"]},{"operator":"EQ","operands":["categoryname","Foreign Large Blend"]},{"operator":"EQ","operands":["categoryname","Foreign Large Growth"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Growth"]},{"operator":"EQ","operands":["categoryname","Foreign Large Blend"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Blend"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Value"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Blend"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Value"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Blend"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Value"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Blend"]},{"operator":"EQ","operands":["categoryname","Foreign Small/Mid Value"]}]},{"operator":"or","operands":[{"operator":"EQ","operands":["performanceratingoverall",4]},{"operator":"EQ","operands":["performanceratingoverall",5]}]},{"operator":"lt","operands":["initialinvestment",100001]},{"operator":"lt","operands":["annualreturnnavy1categoryrank",50]},{"operator":"or","operands":[{"operator":"EQ","operands":["riskratingoverall",1]},{"operator":"EQ","operands":["riskratingoverall",3]},{"operator":"EQ","operands":["riskratingoverall",2]}]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NAS"]}]}]},"userId":"","userIdType":"guid"},
'high_yield_bond': {"offset":0,"size":25,"sortType":"DESC","sortField":"fundnetassets","quoteType":"MUTUALFUND","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["performanceratingoverall",4]},{"operator":"EQ","operands":["performanceratingoverall",5]}]},{"operator":"lt","operands":["initialinvestment",100001]},{"operator":"lt","operands":["annualreturnnavy1categoryrank",50]},{"operator":"or","operands":[{"operator":"EQ","operands":["riskratingoverall",1]},{"operator":"EQ","operands":["riskratingoverall",3]},{"operator":"EQ","operands":["riskratingoverall",2]}]},{"operator":"or","operands":[{"operator":"EQ","operands":["categoryname","High Yield Bond"]}]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NAS"]}]}]},"userId":"","userIdType":"guid"},
'portfolio_anchors': {"offset":0,"size":25,"sortType":"DESC","sortField":"fundnetassets","quoteType":"MUTUALFUND","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["categoryname","Large Blend"]}]},{"operator":"or","operands":[{"operator":"EQ","operands":["performanceratingoverall",4]},{"operator":"EQ","operands":["performanceratingoverall",5]}]},{"operator":"lt","operands":["initialinvestment",100001]},{"operator":"lt","operands":["annualreturnnavy1categoryrank",50]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NAS"]}]}]},"userId":"","userIdType":"guid"},
'solid_large_growth_funds': {"offset":0,"size":25,"sortType":"DESC","sortField":"fundnetassets","quoteType":"MUTUALFUND","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["categoryname","Large Growth"]}]},{"operator":"or","operands":[{"operator":"EQ","operands":["performanceratingoverall",5]},{"operator":"EQ","operands":["performanceratingoverall",4]}]},{"operator":"lt","operands":["initialinvestment",100001]},{"operator":"lt","operands":["annualreturnnavy1categoryrank",50]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NAS"]}]}]},"userId":"","userIdType":"guid"},
'solid_midcap_growth_funds': {"offset":0,"size":25,"sortType":"DESC","sortField":"fundnetassets","quoteType":"MUTUALFUND","query":{"operator":"and","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["categoryname","Mid-Cap Growth"]}]},{"operator":"or","operands":[{"operator":"EQ","operands":["performanceratingoverall",5]},{"operator":"EQ","operands":["performanceratingoverall",4]}]},{"operator":"lt","operands":["initialinvestment",100001]},{"operator":"lt","operands":["annualreturnnavy1categoryrank",50]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NAS"]}]}]},"userId":"","userIdType":"guid"},
'top_mutual_funds': {"offset":0,"size":25,"sortType":"DESC","sortField":"percentchange","quoteType":"MUTUALFUND","query":{"operator":"and","operands":[{"operator":"gt","operands":["intradayprice",15]},{"operator":"or","operands":[{"operator":"EQ","operands":["performanceratingoverall",5]},{"operator":"EQ","operands":["performanceratingoverall",4]}]},{"operator":"gt","operands":["initialinvestment",1000]},{"operator":"or","operands":[{"operator":"eq","operands":["exchange","NAS"]}]}]},"userId":"","userIdType":"guid"}
}
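The predefined bodies above are selected by key, and the tests in `tests/test_screener.py` show that an unknown key raises `ValueError`. A minimal sketch of that lookup behaviour (`PREDEFINED` and `set_predefined_body` here are stand-ins, not the actual yfinance internals):

```python
# Stand-in for PREDEFINED_SCREENER_BODY_MAP, trimmed to one entry.
PREDEFINED = {'most_actives': {'size': 25, 'sortField': 'dayvolume'}}

def set_predefined_body(key):
    # Reject unknown keys up front, as Screener.set_predefined_body
    # appears to do per the bundled tests.
    if key not in PREDEFINED:
        raise ValueError(f'Invalid key {key!r} for predefined screener body')
    return PREDEFINED[key]
```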


@@ -330,6 +330,14 @@ class YfData(metaclass=SingletonMeta):
@utils.log_indent_decorator
def get(self, url, user_agent_headers=None, params=None, proxy=None, timeout=30):
return self._make_request(url, request_method = self._session.get, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout)
@utils.log_indent_decorator
def post(self, url, body, user_agent_headers=None, params=None, proxy=None, timeout=30):
return self._make_request(url, request_method = self._session.post, user_agent_headers=user_agent_headers, body=body, params=params, proxy=proxy, timeout=timeout)
@utils.log_indent_decorator
def _make_request(self, url, request_method, user_agent_headers=None, body=None, params=None, proxy=None, timeout=30):
# Important: treat input arguments as immutable.
if len(url) > 200:
@@ -363,7 +371,11 @@ class YfData(metaclass=SingletonMeta):
'timeout': timeout,
'headers': user_agent_headers or self.user_agent_headers
}
response = self._session.get(**request_args)
if body:
request_args['json'] = body
response = request_method(**request_args)
utils.get_yf_logger().debug(f'response code={response.status_code}')
if response.status_code >= 400:
# Retry with other cookie strategy
@@ -375,7 +387,7 @@ class YfData(metaclass=SingletonMeta):
request_args['params']['crumb'] = crumb
if strategy == 'basic':
request_args['cookies'] = {cookie.name: cookie.value}
response = self._session.get(**request_args)
response = request_method(**request_args)
utils.get_yf_logger().debug(f'response code={response.status_code}')
return response
@@ -397,4 +409,4 @@ class YfData(metaclass=SingletonMeta):
utils.get_yf_logger().debug(f'get_raw_json(): {url}')
response = self.get(url, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout)
response.raise_for_status()
return response.json()
return response.json()
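The hunks above add POST support by passing the session method into `_make_request` and attaching a JSON body only when one is supplied. A minimal standalone sketch of that pattern (names here are illustrative, not yfinance's; `fake_post` is a stub standing in for `session.post`):

```python
# One request helper that takes the HTTP method as a parameter and only
# adds a JSON body when given one -- the same shape as the diff above.
def make_request(request_method, url, body=None, params=None, headers=None, timeout=30):
    request_args = {
        'url': url,
        'params': params,
        'timeout': timeout,
        'headers': headers,
    }
    if body:
        # requests' Session.post serialises a dict passed via json=...
        request_args['json'] = body
    return request_method(**request_args)

# Stub standing in for session.get / session.post, to show the call shape:
def fake_post(url, params=None, timeout=None, headers=None, json=None):
    return {'url': url, 'json': json}

resp = make_request(fake_post, 'https://example.invalid/screener', body={'size': 25})
```

Because the method is injected, the cookie/crumb retry later in `_make_request` can re-issue the same call with `request_method(**request_args)` regardless of verb.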

View File

@@ -200,10 +200,6 @@ def download(tickers, start=None, end=None, actions=False, threads=True, ignore_
if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0] > 0):
shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)
if len(tickers) == 1:
ticker = tickers[0]
return shared._DFS[ticker]
try:
data = _pd.concat(shared._DFS.values(), axis=1, sort=True,
keys=shared._DFS.keys(), names=['Ticker', 'Price'])
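With the single-ticker early return removed, `download()` always goes through the `pd.concat(..., keys=...)` path, so the result has `(Ticker, Price)` MultiIndex columns even for one ticker. A toy illustration of that concat shape (data here is made up):

```python
import pandas as pd

# Per-ticker frames keyed by symbol, as shared._DFS holds them:
dfs = {
    'AAPL': pd.DataFrame({'Close': [1.0, 2.0]}, index=pd.RangeIndex(2)),
}

# keys= turns the dict keys into the outer column level:
data = pd.concat(dfs.values(), axis=1, sort=True,
                 keys=dfs.keys(), names=['Ticker', 'Price'])
print(data.columns.tolist())  # [('AAPL', 'Close')]
```

Callers that previously relied on flat columns for a single ticker would need `data['AAPL']` (or `droplevel`) after this change.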

View File

@@ -14,7 +14,7 @@ Supports ETF and Mutual Funds Data
Queried Modules: quoteType, summaryProfile, fundProfile, topHoldings
Notes:
- fundPerformance module is not implemented as better data is queriable using history
- fundPerformance module is not implemented as better data is queryable using history
'''
class FundsData:
def __init__(self, data: YfData, symbol: str, proxy=None):

View File

@@ -3,6 +3,7 @@ import dateutil as _dateutil
import logging
import numpy as np
import pandas as pd
from math import isclose
import time as _time
import bisect
@@ -145,7 +146,7 @@ class PriceHistory:
params["interval"] = interval.lower()
params["includePrePost"] = prepost
# 1) fix weired bug with Yahoo! - returning 60m for 30m bars
# 1) fix weird bug with Yahoo! - returning 60m for 30m bars
if params["interval"] == "30m":
params["interval"] = "15m"
@@ -253,9 +254,15 @@ class PriceHistory:
endDt = pd.to_datetime(end, unit='s')
if quotes.index[quotes.shape[0] - 1] >= endDt:
quotes = quotes.iloc[0:quotes.shape[0] - 1]
logger.debug(f'{self.ticker}: yfinance received OHLC data: {quotes.index[0]} -> {quotes.index[-1]}')
if quotes.empty:
msg = f'{self.ticker}: yfinance received OHLC data: EMPTY'
elif len(quotes) == 1:
msg = f'{self.ticker}: yfinance received OHLC data: {quotes.index[0]} only'
else:
msg = f'{self.ticker}: yfinance received OHLC data: {quotes.index[0]} -> {quotes.index[-1]}'
logger.debug(msg)
# 2) fix weired bug with Yahoo! - returning 60m for 30m bars
# 2) fix weird bug with Yahoo! - returning 60m for 30m bars
if interval.lower() == "30m":
logger.debug(f'{self.ticker}: resampling 30m OHLC from 15m')
quotes2 = quotes.resample('30min')
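The empty/one-row/range branches added above avoid the `IndexError` that `index[0] -> index[-1]` raised on an empty frame. The repeated three-way branch could be factored into a small helper (hypothetical, not part of this PR):

```python
# Describe an index range for a debug message without ever indexing
# into an empty index (the source of the original IndexError).
def describe_range(index):
    if len(index) == 0:
        return 'EMPTY'
    if len(index) == 1:
        return f'{index[0]} only'
    return f'{index[0]} -> {index[-1]}'

print(describe_range([]))                            # EMPTY
print(describe_range(['2024-01-02']))                # 2024-01-02 only
print(describe_range(['2024-01-02', '2024-01-05']))  # 2024-01-02 -> 2024-01-05
```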
@@ -290,7 +297,13 @@ class PriceHistory:
self._history_metadata_formatted = True
tps = self._history_metadata["tradingPeriods"]
quotes = utils.fix_Yahoo_returning_prepost_unrequested(quotes, params["interval"], tps)
logger.debug(f'{self.ticker}: OHLC after cleaning: {quotes.index[0]} -> {quotes.index[-1]}')
if quotes.empty:
msg = f'{self.ticker}: OHLC after cleaning: EMPTY'
elif len(quotes) == 1:
msg = f'{self.ticker}: OHLC after cleaning: {quotes.index[0]} only'
else:
msg = f'{self.ticker}: OHLC after cleaning: {quotes.index[0]} -> {quotes.index[-1]}'
logger.debug(msg)
# actions
dividends, splits, capital_gains = utils.parse_actions(data["chart"]["result"][0])
@@ -353,7 +366,13 @@ class PriceHistory:
df.loc[df["Capital Gains"].isna(), "Capital Gains"] = 0
else:
df["Capital Gains"] = 0.0
logger.debug(f'{self.ticker}: OHLC after combining events: {quotes.index[0]} -> {quotes.index[-1]}')
if df.empty:
msg = f'{self.ticker}: OHLC after combining events: EMPTY'
elif len(df) == 1:
msg = f'{self.ticker}: OHLC after combining events: {df.index[0]} only'
else:
msg = f'{self.ticker}: OHLC after combining events: {df.index[0]} -> {df.index[-1]}'
logger.debug(msg)
df = utils.fix_Yahoo_returning_live_separate(df, params["interval"], tz_exchange, repair=repair, currency=currency)
@@ -425,7 +444,13 @@ class PriceHistory:
if interval != interval_user:
df = self._resample(df, interval, interval_user, period_user)
logger.debug(f'{self.ticker}: yfinance returning OHLC: {df.index[0]} -> {df.index[-1]}')
if df.empty:
msg = f'{self.ticker}: yfinance returning OHLC: EMPTY'
elif len(df) == 1:
msg = f'{self.ticker}: yfinance returning OHLC: {df.index[0]} only'
else:
msg = f'{self.ticker}: yfinance returning OHLC: {df.index[0]} -> {df.index[-1]}'
logger.debug(msg)
if self._reconstruct_start_interval is not None and self._reconstruct_start_interval == interval:
self._reconstruct_start_interval = None
@@ -682,7 +707,7 @@ class PriceHistory:
if min_dt is not None:
fetch_start = max(min_dt.date(), fetch_start)
logger.debug(f"Fetching {sub_interval} prepost={prepost} {fetch_start}->{fetch_end}", extra=log_extras)
# Temp disable erors printing
# Temp disable errors printing
logger = utils.get_yf_logger()
if hasattr(logger, 'level'):
# YF's custom indented logger doesn't expose level
@@ -1352,9 +1377,26 @@ class PriceHistory:
# div_too_big_improvement_threshold = 1
div_too_big_improvement_threshold = 2
drop_c2l = df2['Close'].iloc[div_idx-1] - df2['Low'].iloc[div_idx]
# drop_c2c = df2['Close'].iloc[div_idx-1] - df2['Close'].iloc[div_idx]
# drop = drop_c2c
if isclose(df2['Low'].iloc[div_idx], df2['Close'].iloc[div_idx-1]*100, rel_tol = 0.025):
# Price has jumped ~100x on ex-div day, need to fix immediately.
drop_c2l = df2['Close'].iloc[div_idx-1]*100 - df2['Low'].iloc[div_idx]
div_pct = div / (df2['Close'].iloc[div_idx-1]*100)
true_adjust = 1.0 - div / (df2['Close'].iloc[div_idx-1]*100)
present_adj = df2['Adj Close'].iloc[div_idx-1] / df2['Close'].iloc[div_idx-1]
if not isclose(present_adj, true_adjust, rel_tol = 0.025):
df2.loc[:dt-_datetime.timedelta(seconds=1), 'Adj Close'] = true_adjust * df2['Close'].loc[:dt-_datetime.timedelta(seconds=1)]
df2.loc[:dt-_datetime.timedelta(seconds=1), 'Repaired?'] = True
elif isclose(df2['Low'].iloc[div_idx], df2['Close'].iloc[div_idx-1]*0.01, rel_tol = 0.025):
# Price has dropped ~100x on ex-div day, need to fix immediately.
drop_c2l = df2['Close'].iloc[div_idx-1]*0.01 - df2['Low'].iloc[div_idx]
div_pct = div / (df2['Close'].iloc[div_idx-1]*0.01)
true_adjust = 1.0 - div / (df2['Close'].iloc[div_idx-1]*0.01)
present_adj = df2['Adj Close'].iloc[div_idx-1] / df2['Close'].iloc[div_idx-1]
if not isclose(present_adj, true_adjust, rel_tol = 0.025):
df2.loc[:dt-_datetime.timedelta(seconds=1), 'Adj Close'] = true_adjust * df2['Close'].loc[:dt-_datetime.timedelta(seconds=1)]
df2.loc[:dt-_datetime.timedelta(seconds=1), 'Repaired?'] = True
else:
drop_c2l = df2['Close'].iloc[div_idx-1] - df2['Low'].iloc[div_idx]
drop = drop_c2l
if div_idx < len(df2)-1:
# # In low-volume scenarios, the price drop is day after not today.
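The ~100x branches above guard against unit mix-ups (e.g. pence vs pounds) confusing the too-big/too-small dividend checks: today's Low is compared against 100x or 0.01x of yesterday's Close with a 2.5% relative tolerance. A minimal sketch of that detection (the prices are invented):

```python
from math import isclose

prev_close = 1.50   # e.g. quoted in pounds yesterday...
today_low = 149.0   # ...and in pence today: a ~100x unit change

# rel_tol=0.025 allows normal intraday movement around the 100x level:
jumped_100x = isclose(today_low, prev_close * 100, rel_tol=0.025)
dropped_100x = isclose(today_low, prev_close * 0.01, rel_tol=0.025)
```

Only when one of these fires does the repair rescale the close-to-low drop and re-derive the expected adjustment before the usual div-too-big/too-small logic runs.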
@@ -1364,8 +1406,10 @@ class PriceHistory:
# elif df2['Volume'].iloc[div_idx]==0:
# if drop == 0.0:
# drop = np.max(df2['Close'].iloc[div_idx-1:div_idx+1].to_numpy() - df2['Low'].iloc[div_idx:div_idx+2].to_numpy())
#
# Hmm, can I always look ahead 1 day? Catch: increases FP rate of div-too-small for tiny divs.
drops = df2['Close'].iloc[div_idx-1:div_idx+1].to_numpy() - df2['Low'].iloc[div_idx:div_idx+2].to_numpy()
# drops = df2['Close'].iloc[div_idx-1:div_idx+1].to_numpy() - df2['Low'].iloc[div_idx:div_idx+2].to_numpy()
drops = np.array([drop, df2['Close'].iloc[div_idx] - df2['Low'].iloc[div_idx+1]])
drop_2Dmax = np.max(drops)
else:
drops = np.array([drop])
@@ -1578,7 +1622,7 @@ class PriceHistory:
divergence = min(abs(ratio1-1.0), abs(ratio2-1.0))
if abs(div_dt-prev_div.name) <= phantom_proximity_threshold and not prev_div['phantom'] and divergence < 0.01:
if prev_div.name in dts_to_check:
# Both this and previous are anomolous, so mark smallest drop as phantom
# Both this and previous are anomalous, so mark smallest drop as phantom
drop = div['drop']
drop_prev = prev_div['drop']
if drop > 1.5*drop_prev:
@@ -1594,7 +1638,7 @@ class PriceHistory:
divergence = min(abs(ratio1-1.0), abs(ratio2-1.0))
if abs(div_dt-next_div.name) <= phantom_proximity_threshold and divergence < 0.01:
if next_div.name in dts_to_check:
# Both this and previous are anomolous, so mark smallest drop as phantom
# Both this and previous are anomalous, so mark smallest drop as phantom
drop = div['drop']
drop_next = next_div['drop']
if drop > 1.5*drop_next:
@@ -1704,21 +1748,27 @@ class PriceHistory:
adjDeltas = x['Adj Low'].iloc[1:].to_numpy() - x['Adj Close'].iloc[:-1].to_numpy()
adjDeltas = np.append([0.0], adjDeltas)
x['adjDelta'] = adjDeltas
for i in np.where(x['Dividends']>0)[0]:
x.loc[x.index[i], 'adjDelta'] += x['Dividends'].iloc[i]*x['Adj'].iloc[i]
deltas = x[['delta', 'adjDelta']]
if div_pct > 0.15 and div_pct < 1.0: # avoid analysing impossibly-big dividends here
if div_pct > 0.05 and div_pct < 1.0:
adjDiv = div * x['Adj'].iloc[0]
f = deltas['adjDelta'] > (adjDiv*0.6)
if f.any():
for idx in np.where(f)[0]:
adjDelta_max_drop_idx = idx
adjDelta_max_drop = deltas['adjDelta'].iloc[idx]
if adjDelta_max_drop > 1.001*deltas['delta'].iloc[adjDelta_max_drop_idx]:
indices = np.where(f)[0]
for idx in indices:
adjDelta_drop = deltas['adjDelta'].iloc[idx]
if adjDelta_drop > 1.001*deltas['delta'].iloc[idx]:
# Adjusted price has risen by more than unadjusted, should not happen.
# See if Adjusted price later falls by a similar amount. This would mean
# dividend has been applied too early.
ratios = (-1*deltas['adjDelta'])/adjDelta_max_drop
ratios = (-1*deltas['adjDelta'])/adjDelta_drop
f_near1_or_above = ratios>=0.8
if f_near1_or_above.any():
# Update: only check for wrong date if no coincident split.
# Because if a split, more likely the div is missing split
split = df2['Stock Splits'].loc[dt]
pre_split = div_status_df['div_pre_split'].loc[dt]
if (split==0.0 or (not pre_split)) and f_near1_or_above.any():
near_indices = np.where(f_near1_or_above)[0]
if len(near_indices) > 1:
penalties = np.zeros(len(near_indices))
@@ -1736,7 +1786,7 @@ class PriceHistory:
div_date_wrong = True
div_true_date = ratios.index[reversal_idx]
break
elif adjDelta_max_drop > 0.39*adjDiv:
elif adjDelta_drop > 0.39*adjDiv:
# Still true that applied adjustment exceeds price action,
# just not clear what solution is (if any).
div_adj_exceeds_prices = True
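The div-date-wrong test in this hunk works by spotting an anomalous *rise* in adjusted price (`adjDelta_drop`) and then looking for a later *fall* of similar size: `ratios >= 0.8` marks candidate days where the dividend was more likely applied. A toy sketch with invented deltas:

```python
import numpy as np

# Adjusted-price day deltas; the +0.5 rise is the anomaly, the -0.48
# fall three days later nearly cancels it.
adjDelta = np.array([0.0, 0.5, 0.0, -0.48, 0.0])
adjDelta_drop = 0.5

# A fall of comparable size gives a ratio near 1:
ratios = (-1 * adjDelta) / adjDelta_drop
f_near1_or_above = ratios >= 0.8
reversal_candidates = np.where(f_near1_or_above)[0]
print(reversal_candidates)  # [3]
```

The PR additionally skips this check when a coincident split exists, since then the more likely fault is a dividend missing its split adjustment.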
@@ -1753,6 +1803,27 @@ class PriceHistory:
div_status['div_date_wrong'] = div_date_wrong
div_status['div_true_date'] = div_true_date
if div_adj_exceeds_prices:
split = df2['Stock Splits'].loc[dt]
if split != 0.0:
# Check again if div missing split. Use looser tolerance
# as we know the adjustment seems wrong.
div_postSplit = div / split
if div_postSplit > div:
# Use volatility-adjusted drop
typical_volatility = div_status_df['vol'].loc[dt]
drop = div_status_df['drop'].loc[dt]
_drop = drop - typical_volatility
else:
drop_2Dmax = div_status_df['drop_2Dmax'].loc[dt]
_drop = drop_2Dmax
if _drop > 0:
diff = abs(div-_drop)
diff_postSplit = abs(div_postSplit-_drop)
if diff_postSplit <= (diff*1.1):
# possibilities.append({'state':'div-pre-split', 'diff':diff_postSplit})
div_status_df.loc[dt, 'div_pre_split'] = True
for k,v in div_status.items():
if k not in div_status_df:
if isinstance(v, (bool, np.bool_)):
@@ -1828,7 +1899,7 @@ class PriceHistory:
if 'div_date_wrong' in cluster.columns and (cluster[c] == cluster['div_date_wrong']).all():
continue
if 'adj_exceeds_prices' in cluster.columns and (cluster[c] == cluster['adj_exceeds_prices']).all():
if 'adj_exceeds_prices' in cluster.columns and (cluster[c] == (cluster[c] & cluster['adj_exceeds_prices'])).all():
# More likely that true-positive. Maybe the div never happened
continue
@@ -1852,6 +1923,11 @@ class PriceHistory:
if pct_fail >= true_threshold:
div_status_df.loc[fc, c] = True
if 'div_date_wrong' in div_status_df.columns:
# reset this as well
div_status_df.loc[fc, 'div_date_wrong'] = False
div_status_df.loc[fc, 'div_true_date'] = pd.NaT
cluster = div_status_df[fc].sort_index()
continue
elif pct_fail <= fals_threshold:
div_status_df.loc[fc, c] = False
@@ -1937,6 +2013,11 @@ class PriceHistory:
div_too_big = False
cluster.loc[dt, 'div_too_big'] = False
n_failed_checks -= 1
if div_exceeds_adj:
# false-positive
div_exceeds_adj = False
cluster.loc[dt, 'div_exceeds_adj'] = False
n_failed_checks -= 1
if div_pre_split:
if adj_exceeds_prices:

View File

@@ -0,0 +1,4 @@
from .screener import Screener
from .screener_query import EquityQuery
__all__ = ['EquityQuery', 'Screener']

View File

@@ -0,0 +1,105 @@
from typing import Dict
from yfinance import utils
from yfinance.data import YfData
from yfinance.const import _BASE_URL_, PREDEFINED_SCREENER_BODY_MAP
from .screener_query import Query
_SCREENER_URL_ = f"{_BASE_URL_}/v1/finance/screener"
class Screener:
def __init__(self, session=None, proxy=None):
self.proxy = proxy
self.session = session
self._data: YfData = YfData(session=session)
self._body: Dict = {}
self._response: Dict = {}
self._body_updated = False
self._accepted_body_keys = {"offset","size","sortField","sortType","quoteType","query","userId","userIdType"}
self._predefined_bodies = PREDEFINED_SCREENER_BODY_MAP.keys()
@property
def body(self) -> Dict:
return self._body
@property
def response(self) -> Dict:
if self._body_updated or self._response is None:
self._fetch_and_parse()
self._body_updated = False
return self._response
@property
def predefined_bodies(self) -> Dict:
return self._predefined_bodies
def set_default_body(self, query: Query, offset: int = 0, size: int = 100, sortField: str = "ticker", sortType: str = "desc", quoteType: str = "equity", userId: str = "", userIdType: str = "guid") -> None:
self._body_updated = True
self._body = {
"offset": offset,
"size": size,
"sortField": sortField,
"sortType": sortType,
"quoteType": quoteType,
"query": query.to_dict(),
"userId": userId,
"userIdType": userIdType
}
def set_predefined_body(self, k: str) -> None:
body = PREDEFINED_SCREENER_BODY_MAP.get(k, None)
if not body:
raise ValueError(f'Invalid key {k} provided for predefined screener')
self._body_updated = True
self._body = body
def set_body(self, body: Dict) -> None:
missing_keys = [key for key in self._accepted_body_keys if key not in body]
if missing_keys:
raise ValueError(f"Missing required keys in body: {missing_keys}")
extra_keys = [key for key in body if key not in self._accepted_body_keys]
if extra_keys:
raise ValueError(f"Body contains extra keys: {extra_keys}")
self._body_updated = True
self._body = body
def patch_body(self, values: Dict) -> None:
extra_keys = [key for key in values if key not in self._accepted_body_keys]
if extra_keys:
raise ValueError(f"Body contains extra keys: {extra_keys}")
self._body_updated = True
for k in values:
self._body[k] = values[k]
def _validate_body(self) -> None:
if not all(k in self._body for k in self._accepted_body_keys):
raise ValueError("Missing required keys in body")
def _fetch(self) -> Dict:
params_dict = {"corsDomain": "finance.yahoo.com", "formatted": "false", "lang": "en-US", "region": "US"}
response = self._data.post(_SCREENER_URL_, body=self.body, user_agent_headers=self._data.user_agent_headers, params=params_dict, proxy=self.proxy)
response.raise_for_status()
return response.json()
def _fetch_and_parse(self) -> None:
response = None
self._validate_body()
try:
response = self._fetch()
self._response = response['finance']['result'][0]
except Exception as e:
logger = utils.get_yf_logger()
logger.error(f"Failed to get screener data for '{self._body.get('query', 'query not set')}' reason: {e}")
logger.debug("Got response: ")
logger.debug("-------------")
logger.debug(f" {response}")
logger.debug("-------------")
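`set_body` and `patch_body` above validate against a fixed key set before any request is made. An offline sketch of that validation (same accepted keys as the diff; no network call, and `patch_body` here is a plain function rather than the class method):

```python
# The accepted body keys from Screener._accepted_body_keys:
ACCEPTED = {"offset", "size", "sortField", "sortType", "quoteType",
            "query", "userId", "userIdType"}

def patch_body(body, values):
    # Partial update: only reject unknown keys, unlike set_body which
    # also requires every accepted key to be present.
    extra = [k for k in values if k not in ACCEPTED]
    if extra:
        raise ValueError(f"Body contains extra keys: {extra}")
    body.update(values)
    return body

body = {"offset": 0, "size": 100}
patch_body(body, {"size": 25})      # accepted key: body now has size=25
try:
    patch_body(body, {"bogus": 1})  # unknown key: rejected
except ValueError as e:
    print(e)  # Body contains extra keys: ['bogus']
```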

View File

@@ -0,0 +1,86 @@
import numbers
from typing import List, Union, Dict, Set
from yfinance.const import EQUITY_SCREENER_EQ_MAP, EQUITY_SCREENER_FIELDS
from yfinance.exceptions import YFNotImplementedError
class Query:
def __init__(self, operator: str, operand: Union[numbers.Real, str, List['Query']]):
self.operator = operator
self.operands = operand
def to_dict(self) -> Dict:
raise YFNotImplementedError('to_dict() needs to be implemented by children classes')
class EquityQuery(Query):
def __init__(self, operator: str, operand: Union[numbers.Real, str, List['EquityQuery']]):
operator = operator.upper()
if not isinstance(operand, list):
raise TypeError('Invalid operand type')
if len(operand) <= 0:
raise ValueError('Operand list cannot be empty')
if operator in {'OR','AND'}:
self._validate_or_and_operand(operand)
elif operator == 'EQ':
self._validate_eq_operand(operand)
elif operator == 'BTWN':
self._validate_btwn_operand(operand)
elif operator in {'GT','LT'}:
self._validate_gt_lt(operand)
else:
raise ValueError('Invalid Operator Value')
self.operator = operator
self.operands = operand
self._valid_eq_map = EQUITY_SCREENER_EQ_MAP
self._valid_fields = EQUITY_SCREENER_FIELDS
@property
def valid_eq_map(self) -> Dict:
return self._valid_eq_map
@property
def valid_fields(self) -> Set:
return self._valid_fields
def _validate_or_and_operand(self, operand: List['EquityQuery']) -> None:
if len(operand) <= 1:
raise ValueError('Operand list must contain more than one element')
if not all(isinstance(e, EquityQuery) for e in operand):
raise TypeError('Operand must be type EquityQuery for OR/AND')
def _validate_eq_operand(self, operand: List[Union[str, numbers.Real]]) -> None:
if len(operand) != 2:
raise ValueError('Operand must be length 2 for EQ')
if operand[0] not in EQUITY_SCREENER_FIELDS:
raise ValueError('Invalid field for Screener')
if operand[0] not in EQUITY_SCREENER_EQ_MAP:
raise ValueError('Invalid EQ key')
if operand[1] not in EQUITY_SCREENER_EQ_MAP[operand[0]]:
raise ValueError('Invalid EQ value')
def _validate_btwn_operand(self, operand: List[Union[str, numbers.Real]]) -> None:
if len(operand) != 3:
raise ValueError('Operand must be length 3 for BTWN')
if operand[0] not in EQUITY_SCREENER_FIELDS:
raise ValueError('Invalid field for Screener')
if not isinstance(operand[1], numbers.Real):
raise TypeError('Invalid comparison type for BTWN')
if not isinstance(operand[2], numbers.Real):
raise TypeError('Invalid comparison type for BTWN')
def _validate_gt_lt(self, operand: List[Union[str, numbers.Real]]) -> None:
if len(operand) != 2:
raise ValueError('Operand must be length 2 for GT/LT')
if operand[0] not in EQUITY_SCREENER_FIELDS:
raise ValueError('Invalid field for Screener')
if not isinstance(operand[1], numbers.Real):
raise TypeError('Invalid comparison type for GT/LT')
def to_dict(self) -> Dict:
return {
"operator": self.operator,
"operands": [operand.to_dict() if isinstance(operand, EquityQuery) else operand for operand in self.operands]
}
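`EquityQuery.to_dict` above recurses through nested queries to produce Yahoo's JSON shape (the same structure visible in `PREDEFINED_SCREENER_BODY_MAP`). A standalone mini-version showing just the recursion, without the operand validation (`Q` is a stand-in, not the yfinance class):

```python
class Q:
    def __init__(self, operator, operands):
        self.operator = operator.upper()
        self.operands = operands

    def to_dict(self):
        # Nested Q instances flatten recursively; leaf operands
        # (field names, numbers, strings) pass through unchanged.
        return {"operator": self.operator,
                "operands": [o.to_dict() if isinstance(o, Q) else o
                             for o in self.operands]}

q = Q('and', [Q('gt', ['intradayprice', 15]),
              Q('eq', ['exchange', 'NAS'])])
print(q.to_dict())
```

This yields the `{"operator": ..., "operands": [...]}` tree that `Screener.set_default_body` embeds under the `query` key.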

View File

@@ -1 +1 @@
version = "0.2.44"
version = "0.2.45"