NTO needs to extract 50 million records from a custom object in its Salesforce org every day, and is running into query timeout issues during the extraction.
What should the data architect recommend to avoid the timeout issues?
Correct answer: C
The best way to extract 50 million records from a custom object every day without hitting query timeouts is to use an ETL tool. ETL stands for extract, transform, and load, and refers to the process of moving data from one system to another. An ETL tool is a software application that can connect to various data sources, perform data transformations, and load data into a target destination. ETL tools can handle large volumes of data efficiently and reliably, and they typically provide features such as scheduling, monitoring, error handling, and logging. Using a custom auto number and formula field to chunk records during extraction is a possible workaround, but it requires creating additional fields and writing more complex queries. The REST API can extract data as it automatically chunks records by 200, but it has limitations, such as a maximum of 50 million records per query job. Asking Salesforce support to increase the query timeout value is not feasible because query timeout values are not configurable.
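The auto number chunking workaround mentioned above can be sketched as follows. This is a minimal illustration, not Salesforce's API: it assumes a hypothetical numeric formula field `Chunk_Number__c` that exposes the auto number as an integer so SOQL range filters can be applied, and it only builds the per-chunk query strings (an ETL tool or API client would then run each one).

```python
# Sketch of the "custom auto number + formula field" chunking workaround.
# Chunk_Number__c is a hypothetical formula field returning the record's
# auto number as an integer; the object name is also an assumption.

def chunk_queries(total_records: int, chunk_size: int,
                  object_name: str = "Order__c") -> list[str]:
    """Build one SOQL query per contiguous range of auto numbers."""
    queries = []
    for start in range(0, total_records, chunk_size):
        end = start + chunk_size
        queries.append(
            f"SELECT Id FROM {object_name} "
            f"WHERE Chunk_Number__c >= {start} AND Chunk_Number__c < {end}"
        )
    return queries

# 50 million records in 250,000-record chunks -> 200 separate queries,
# each small enough to stay well under typical timeout limits.
batches = chunk_queries(50_000_000, 250_000)
print(len(batches))
print(batches[0])
```

Each query covers a bounded, indexed-friendly range, which is exactly why this works around timeouts; the drawback, as noted above, is the extra field setup and query management that an ETL tool would otherwise handle for you.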
Latest comments (newest comments appear at the top.)
I think the answer is A.