CLEVER: A Curated Benchmark for Formally Verified Code Generation
12 Apr 2026 at 6:10am
We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; it evaluates both formal specification generation and implementation synthesis from natural language, requiring formal correctness proofs for both.
CLEVER: A Curated Benchmark for Formally Verified Code Generation
10 Apr 2026 at 9:00pm
TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all stages, making it a strong testbed for synthesis and formal reasoning.
The Clever Hans Mirage: A Comprehensive Survey on Spurious...
11 Apr 2026 at 5:52pm
This survey on spurious correlations uses the Clever Hans metaphor to motivate the problem, formalizes a group-based setup g=(y,a) with core metrics (worst-group, average-group, bias-conflicting), and explains why models latch onto shortcuts (simplicity bias, training dynamics).
Counterfactual Debiasing for Fact Verification
9 Apr 2026 at 11:17pm
In this paper, we propose a novel counterfactual framework, CLEVER, for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates biases at the inference stage. In CLEVER, the claim-evidence fusion model and the claim-only model are independently trained to capture the corresponding information.
Evaluating the Robustness of Neural Networks: An Extreme Value...
11 Apr 2026 at 5:52pm
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.
STAIR: Improving Safety Alignment with Introspective Reasoning
11 Apr 2026 at 5:52pm
One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.
Submissions | OpenReview
12 Apr 2026 at 11:46am
Promoting openness in scientific communication and the peer-review process
On the Planning Abilities of Large Language Models : A Critical ...
10 Apr 2026 at 3:30pm
While, as we mentioned earlier, there can be thorny "Clever Hans" issues about humans prompting LLMs, an automated verifier mechanically backprompting the LLM doesn't suffer from these. We tested this setup on a subset of the failed instances in the one-shot natural language prompt configuration using GPT-4, given its larger context window.
Dual-Model Defense: Safeguarding Diffusion Models from Membership ...
11 Apr 2026 at 5:52pm
Membership inference and memorization are a key challenge with diffusion models. Mitigating such vulnerabilities is hence an important topic. The idea of using an ensemble of models is clever.
WHAT IS THIS? This is an unscreened compilation of results from several search engines. The sites listed are not necessarily recommended by Surfnetkids.com.