Conference Paper: How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models

Title: How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models
Authors:
Issue Date: 30-Apr-2025
Persistent Identifier: http://hdl.handle.net/10722/359596

DC Field: Value
dc.contributor.author: Wu, Chuan
dc.date.accessioned: 2025-09-08T00:30:24Z
dc.date.available: 2025-09-08T00:30:24Z
dc.date.issued: 2025-04-30
dc.identifier.uri: http://hdl.handle.net/10722/359596
dc.language: eng
dc.relation.ispartof: the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL) (29/04/2025-04/05/2025, Albuquerque, New Mexico)
dc.title: How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models
dc.type: Conference_Paper
