The competitions I participated in:

I worked in a team of 3 with 2 of my coworkers from Deloitte, Harrison Jones (HJ) and Guillame Bertrand (GB). We each worked on our own scripts and ensembled our models for the final set of predictions. We scored in the 86th percentile, below one of the public collaboration solutions.

Two main kernels were used: one for prediction and one for Bayesian hyperparameter optimization.
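
For illustration, here is a minimal sketch of what a Bayesian hyperparameter optimization kernel for LightGBM could look like, assuming the `bayesian-optimization` package; the synthetic data, parameter names, and bounds are assumptions, not the exact setup we used.

```python
# Hypothetical sketch of a Bayesian hyperparameter optimization kernel for LightGBM.
# The synthetic data, parameter names, and bounds are illustrative only.
import lightgbm as lgb
from bayes_opt import BayesianOptimization
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the competition training data (roughly 13:1 imbalance).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.93], random_state=0)

def lgb_cv_auc(num_leaves, learning_rate, colsample_bytree):
    """Mean 3-fold cross-validated AUC for a given parameter set."""
    model = lgb.LGBMClassifier(
        num_leaves=int(num_leaves),
        learning_rate=learning_rate,
        colsample_bytree=colsample_bytree,
        n_estimators=200,
    )
    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

optimizer = BayesianOptimization(
    f=lgb_cv_auc,
    pbounds={
        "num_leaves": (20, 60),
        "learning_rate": (0.01, 0.1),
        "colsample_bytree": (0.6, 1.0),
    },
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=20)
print(optimizer.max)  # best score and parameters found
```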

It was a very interesting problem, as the classes were highly imbalanced, roughly 13:1 in favour of the non-default category. The approach we found worked best was to upsample the default class by a factor of 2 or 3. More advanced methods exist, such as generating synthetic points within the convex hull of the default cluster and adding them to the class (the idea behind SMOTE), but due to the 6-hour compute limit and the RAM of the Kaggle server, we kept it simple.
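
A minimal sketch of that simple upsampling step, assuming the data sits in a pandas DataFrame with a binary `TARGET` column (the column name is an assumption for illustration):

```python
# Sketch of duplicating the minority (default) class before training;
# the DataFrame layout and 'TARGET' column name are assumptions.
import pandas as pd

def upsample_minority(df: pd.DataFrame, target_col: str = "TARGET",
                      minority_label: int = 1, factor: int = 3) -> pd.DataFrame:
    """Repeat minority-class rows `factor` times in total and shuffle the result."""
    minority = df[df[target_col] == minority_label]
    # Concatenate the original frame with (factor - 1) extra copies of the minority rows.
    upsampled = pd.concat([df] + [minority] * (factor - 1), ignore_index=True)
    return upsampled.sample(frac=1, random_state=42).reset_index(drop=True)

# Example usage on a toy frame with a ~13:1 imbalance.
toy = pd.DataFrame({"feature": range(14), "TARGET": [0] * 13 + [1]})
print(upsample_minority(toy, factor=3)["TARGET"].value_counts())
```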

Together with GB, I created additional features beyond what was already in the kernel to further improve on the solution.
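
Purely as an illustration of adding derived columns to an existing frame, a sketch is below; the column names and ratios are hypothetical and are not the specific features we engineered.

```python
# Hypothetical example of adding derived features to a pandas DataFrame;
# the column names (credit_amount, income, age_days) are illustrative only.
import numpy as np
import pandas as pd

def add_ratio_features(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the frame with a couple of derived columns added."""
    out = df.copy()
    out["credit_to_income"] = out["credit_amount"] / out["income"].replace(0, np.nan)
    out["age_years"] = -out["age_days"] / 365.25
    return out

toy = pd.DataFrame({"credit_amount": [1000.0, 5000.0],
                    "income": [2000.0, 0.0],
                    "age_days": [-12000, -15000]})
print(add_ratio_features(toy))
```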

The model we used is LightGBM (Light Gradient Boosting Machine), a library that implements tree-based algorithms; in particular, we used gradient-boosted decision trees.
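
A minimal sketch of fitting a gradient-boosted decision tree classifier with LightGBM's scikit-learn interface; the synthetic data and parameter values are illustrative, not those from the competition.

```python
# Sketch of training a LightGBM gradient-boosted decision tree classifier.
# The synthetic data and parameter values are illustrative only.
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the competition data, with a roughly 13:1 class imbalance.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.93], random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

model = lgb.LGBMClassifier(
    n_estimators=500,
    learning_rate=0.05,
    num_leaves=31,
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], eval_metric="auc")

# Evaluate on the held-out split using AUC, the usual metric for imbalanced binary targets.
pred = model.predict_proba(X_valid)[:, 1]
print("validation AUC:", roc_auc_score(y_valid, pred))
```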