Description
Code for generating summaries that preserve the moral framing of the original news article. We leverage the zero-shot summarization ability of Large Language Models, which has been shown to produce results on par with human-written summaries. We compare three language models and five prompting methods. Building on the intuition that journalists intentionally use or report moral-laden words in the article text, we propose approaches that first identify moral-laden words in the article (e.g., through Chain-of-Thought prompting or supervised classification) and then guide the language model to preserve those words in the summary.
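The two-step approach described above can be sketched as prompt construction: one prompt to extract moral-laden words, and a second that instructs the model to keep them in the summary. This is a minimal illustrative sketch, not the repository's actual code; the prompt wording and the `build_prompts` helper are hypothetical.

```python
# Hypothetical sketch of the two-step prompting approach:
# step 1 extracts moral-laden words, step 2 guides the summarizer
# to preserve them. Prompt templates are illustrative assumptions.

MORAL_WORD_PROMPT = (
    "List the moral-laden words in the following news article, "
    "one per line.\n\nArticle:\n{article}"
)

SUMMARY_PROMPT = (
    "Summarize the following news article. Preserve these "
    "moral-laden words in the summary: {words}.\n\nArticle:\n{article}"
)

def build_prompts(article: str, moral_words: list[str]) -> tuple[str, str]:
    """Return the extraction prompt and the guided summarization prompt."""
    extraction = MORAL_WORD_PROMPT.format(article=article)
    summary = SUMMARY_PROMPT.format(
        words=", ".join(moral_words), article=article
    )
    return extraction, summary

article = "The council's betrayal of its promise harmed vulnerable residents."
extraction_prompt, summary_prompt = build_prompts(article, ["betrayal", "harmed"])
```

In the Chain-of-Thought variant, the extraction step would be folded into a single prompt asking the model to first list the moral-laden words and then summarize; in the supervised variant, the word list would come from a trained classifier instead of the first prompt.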
| Date made available | 30 Jul 2025 |
|---|---|
| Publisher | TU Delft - 4TU.ResearchData |