AUTOMATED FEATURE SELECTION SYSTEM FOR COMPUTER-INTEGRATED PREDICTION OF B2B ORDER SUCCESS USING XGBOOST

Author(s)

DOI:

https://doi.org/10.32782/3041-2080/2025-5-7

Keywords:

automated feature selection, gradient boosting, XGBoost, B2B forecasting, machine learning, dimensionality reduction, AUC-PR

Abstract

The rapid growth of the corporate e-commerce sector creates a need for automated forecasting systems that optimize supply-chain management and minimize financial risk. Accurate prediction of B2B order success is essential for effective resource management and for optimizing an enterprise's production processes. Modern ERP systems accumulate large volumes of data on customer behaviour, which opens opportunities for applying machine-learning methods to decision-making.

The aim of the study is to develop an optimal strategy for automated feature selection for predicting B2B order success using the XGBoost algorithm within the architecture of computer-integrated enterprise management systems. An experimental evaluation of six feature-selection methods was carried out on two independent datasets totalling 86,794 records with 24 order attributes. The comparison covered forward and backward selection based on XGBoost feature importance, greedy forward and backward algorithms, recursive feature elimination, and the Boruta algorithm. Hyperparameters were tuned automatically with the Optuna framework using the Tree-structured Parzen Estimator algorithm. Performance was assessed with the AUC-PR metric under 5-fold cross-validation to ensure the statistical reliability of the results.

The greedy algorithms delivered the highest classification performance: forward selection reached an AUC-PR of 0.97873, backward selection 0.97785. The optimal feature set was found to contain 16 attributes, a 33% reduction in dimensionality that also improves predictive quality. The combined approach reduced misclassifications by 2.7–3.7% relative to baseline settings and identified nine critically important features.

The results provide a methodological basis for developing automated forecasting systems in modern computer-integrated production complexes.
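The pipeline described above — a TPE-driven hyperparameter search scored by AUC-PR under 5-fold cross-validation, followed by a greedy wrapper search over feature subsets — can be reproduced in outline with standard tooling. The following is a minimal sketch, not the authors' code: `X` and `y` stand in for the prepared order-attribute matrix and binary success labels (synthetic data here), the search ranges and trial count are assumptions, and scikit-learn's `average_precision` scorer is used as the AUC-PR estimate.

```python
# Sketch 1: Optuna (TPE sampler) tuning XGBoost, scored by AUC-PR with 5-fold CV.
import numpy as np
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for the 24-attribute order dataset (assumption).
X, y = make_classification(n_samples=2000, n_features=24,
                           weights=[0.8], random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

def objective(trial):
    # Illustrative search space; the paper's actual ranges are not given here.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 800),
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = XGBClassifier(eval_metric="aucpr", **params)
    # "average_precision" serves as the AUC-PR criterion.
    return cross_val_score(model, X, y, cv=cv,
                           scoring="average_precision").mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```

The greedy forward/backward wrapper search that performed best in the study can likewise be illustrated with scikit-learn's `SequentialFeatureSelector`; the authors' own implementation may differ, and the subset size of 16 below simply mirrors the optimum reported in the abstract. This sketch continues from the variables defined above:

```python
# Sketch 2: greedy forward selection of a 16-feature subset, scored by AUC-PR.
from sklearn.feature_selection import SequentialFeatureSelector

base = XGBClassifier(eval_metric="aucpr", **study.best_params)
sfs = SequentialFeatureSelector(
    base,
    n_features_to_select=16,     # optimal subset size reported in the paper
    direction="forward",         # "backward" gives the greedy backward variant
    scoring="average_precision",
    cv=cv,
    n_jobs=-1,
)
sfs.fit(X, y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```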


Published

2025-11-10

Issue

Section

AUTOMATION, COMPUTER-INTEGRATED TECHNOLOGIES AND ROBOTICS