Empty pandas DataFrame retaining data types
I want to create an empty DataFrame to use as a template that preserves the column data types. My code is shown below:
```python
import pandas as pd
import datetime
from dataclasses import dataclass

@dataclass
class OpenOrder:
    symbol: str = "dummy"
    sectype: str = "stk"
    dt: datetime.datetime = datetime.datetime.now()
    price: float = 0.0
    status: str = None

    def empty(self):
        open_ord = self()   # raises TypeError here
        empty_df = pd.DataFrame([open_ord.__dict__])
        return empty_df.iloc[0:0]
```
Instantiation works, but calling `empty()` to get the cleared template fails:
```python
open_order = OpenOrder()
order_df = open_order.empty()
```
How can I do this?
Correct answer
You cannot call `self()`, because `self` is a reference to the instance, not to the class. Just use:

```python
def empty(self):
    empty_df = pd.DataFrame([self.__dict__])
    return empty_df.iloc[0:0]
```
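A minimal end-to-end sketch of the corrected class, showing that the empty slice keeps the dtypes inferred from the default values. Note that `field(default_factory=...)` is an extra fix I am assuming here (a plain `datetime.datetime.now()` default would be evaluated only once, at class-definition time), not something from the original answer:

```python
import datetime
from dataclasses import dataclass, field

import pandas as pd


@dataclass
class OpenOrder:
    symbol: str = "dummy"
    sectype: str = "stk"
    # default_factory so the timestamp is taken per instance,
    # not once at class-definition time (assumed improvement)
    dt: datetime.datetime = field(default_factory=datetime.datetime.now)
    price: float = 0.0
    status: str = None

    def empty(self):
        # build a one-row frame from the instance, then slice the row away;
        # the inferred column dtypes survive the empty slice
        empty_df = pd.DataFrame([self.__dict__])
        return empty_df.iloc[0:0]


order_df = OpenOrder().empty()
print(len(order_df))    # 0 rows
print(order_df.dtypes)  # dtypes inferred from the defaults
```

Here `price` comes out as `float64` and `dt` as `datetime64[ns]`, while the string fields are plain `object` columns, so the template really does reserve the types.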
The above is the detailed content of Empty pandas dataframe retaining data types. For more information, please follow other related articles on the PHP Chinese website!
