A final and crucial aspect of text summarization research is the evaluation of generated summaries and the use of feedback to improve them. Evaluation methods can be classified as intrinsic or extrinsic. Intrinsic methods measure the quality of a summary against criteria such as relevance, coherence, readability, or informativeness, and can be further divided into automatic and human methods. Automatic methods either compare the generated summary with reference summaries using overlap-based metrics such as ROUGE or BLEU, or score it with neural models, as in BERTScore or BARTScore; a sketch of both metric families appears at the end of this section. Human methods rely on the judgments or ratings of human evaluators, such as domain experts or crowd workers.

Extrinsic methods instead measure the usefulness or impact of summaries for downstream tasks or applications, such as question answering, sentiment analysis, or decision making. These methods can likewise involve automatic or human evaluation, depending on the task or application.

Feedback methods aim to improve summary quality by providing guidance or correction signals to the summarization model, for example through reinforcement learning, active learning, or interactive learning; a toy reinforcement-learning sketch is given below.
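To make the automatic intrinsic metrics concrete, the following is a minimal sketch assuming the third-party rouge-score and bert-score Python packages are installed (pip install rouge-score bert-score); the example texts are invented for illustration.

```python
# Minimal sketch of automatic intrinsic evaluation. Assumes the
# rouge-score and bert-score packages; the texts are invented examples.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "The committee approved the new budget after a long debate."
candidate = "After lengthy debate, the committee passed the new budget."

# Overlap-based metric: ROUGE compares n-gram and longest-common-subsequence
# overlap between the candidate and the reference summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)  # signature: score(target, prediction)
for name, result in rouge.items():
    print(f"{name}: precision={result.precision:.3f} "
          f"recall={result.recall:.3f} f1={result.fmeasure:.3f}")

# Model-based metric: BERTScore matches candidate and reference tokens by
# contextual-embedding similarity (downloads a pretrained model on first use).
P, R, F1 = bert_score([candidate], [reference], lang="en")
print(f"BERTScore F1: {F1.item():.3f}")
```

Overlap metrics such as ROUGE are fast and reproducible but reward exact lexical matches, whereas embedding-based metrics such as BERTScore give credit to paraphrases at a higher computational cost, which is why evaluations commonly report both.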
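The reinforcement-learning feedback loop can be illustrated with a deliberately tiny sketch: vanilla REINFORCE with ROUGE-1 F1 as the reward, assuming PyTorch and rouge-score are installed. The unconditional "policy" and the toy vocabulary are invented stand-ins for a real pretrained summarizer.

```python
# Toy sketch of reinforcement-learning feedback for summarization
# (REINFORCE with a ROUGE-1 reward). The tiny policy and vocabulary are
# invented for illustration; a real system would fine-tune a pretrained
# encoder-decoder conditioned on the source document.
import torch
from rouge_score import rouge_scorer

vocab = ["the", "committee", "approved", "budget", "debate", "after", "new", "a"]
reference = "the committee approved the new budget"

# Toy policy: one independent categorical distribution per output position.
logits = torch.nn.Parameter(torch.zeros(6, len(vocab)))
optimizer = torch.optim.Adam([logits], lr=0.1)
scorer = rouge_scorer.RougeScorer(["rouge1"])

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()                       # one token index per position
    summary = " ".join(vocab[i] for i in sample)
    reward = scorer.score(reference, summary)["rouge1"].fmeasure
    # REINFORCE: increase the log-probability of sampled summaries
    # in proportion to the ROUGE reward they receive.
    loss = -reward * dist.log_prob(sample).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("learned summary:", " ".join(vocab[i] for i in logits.argmax(dim=-1)))
```

In a realistic setting the policy would be a pretrained summarization model, the objective would typically subtract a baseline (as in self-critical sequence training) to reduce gradient variance, and the reward could come from a learned human-preference model rather than ROUGE.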