We focus on constrained, L-smooth, nonconvex-nonconcave min-max problems that either satisfy rho-cohypomonotonicity or admit a solution to the rho-weak Minty Variational Inequality (MVI), where larger values of the parameter rho>0 correspond to a greater degree of nonconvexity. Relevant problem classes include two-player reinforcement learning and interaction-dominant min-max problems. It has been conjectured that first-order methods can tolerate values of
rho no larger than 1/L, but results until now have stagnated at the tighter requirement rho<0.5/L. We obtain optimal or best-known complexity guarantees under cohypomonotonicity or weak MVI conditions for rho<1/L, using inexact variants of the Halpern and Krasnoselskii-Mann (KM) iterations. We also provide algorithms and complexity guarantees for the stochastic case over the same range of rho. Our improvements come from harnessing the recently proposed "conic nonexpansiveness" property of operators. Finally, we provide a refined analysis of inexact Halpern iteration and propose a stochastic KM iteration with a multilevel Monte Carlo estimator.
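As a point of reference, the two conditions and the two iterations can be sketched as follows, in notation (an operator F, points z, z', a solution z^*, a suitable map T, and step sizes beta_k, lambda_k) that is ours rather than the abstract's. The rho-weak MVI condition asks for a solution z^* satisfying
\[
\langle F(z),\, z - z^* \rangle \;\ge\; -\rho\, \|F(z)\|^2 \quad \text{for all feasible } z,
\]
while rho-cohypomonotonicity requires
\[
\langle F(z) - F(z'),\, z - z' \rangle \;\ge\; -\rho\, \|F(z) - F(z')\|^2 \quad \text{for all } z,\, z'.
\]
The exact Halpern and KM iterations applied to a map T (for instance, a resolvent-type operator built from F) read
\[
z_{k+1} = \beta_k z_0 + (1 - \beta_k)\, T(z_k) \quad \text{(Halpern)}, \qquad
z_{k+1} = (1 - \lambda_k)\, z_k + \lambda_k\, T(z_k) \quad \text{(KM)},
\]
and the inexact and stochastic variants mentioned above replace T(z_k) with an approximate or sampled evaluation.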