LLMapp 2025

The 1st International Workshop on LLM App Store Analysis

Mon 23 - Fri 27 June 2025, Trondheim, Norway

held in conjunction with the ACM International Conference on the Foundations of Software Engineering (FSE 2025)


Introduction

The 1st International Workshop on LLM App Store Analysis (LLMapp 2025), co-located with FSE 2025 in Trondheim, Norway, invites submissions that explore all aspects of large language model (LLM) app stores. This workshop aims to bring together researchers, industry practitioners, and students to discuss the latest trends, challenges, and future directions in LLM app ecosystems.

LLMs are trained on vast amounts of text data, allowing them to perform a wide range of natural language processing tasks. The advent of LLMs has opened up new possibilities for various applications, including chatbots, content generation, language translation, and sentiment analysis. As the capabilities of LLMs continue to expand, there has been a growing interest in making these models accessible to a broader audience. This has led to the emergence of LLM app stores such as GPT Store, Poe, Coze, and FlowGPT.

These platforms provide a centralized marketplace where users can browse, download, and use LLM-based apps across various domains, such as productivity, education, entertainment, and personal assistance.

Definitions

  • LLM app: A specialized application (app) powered by an LLM, distinct from conventional mobile apps that may incorporate LLM technology. These apps, typically found on platforms like GPT Store, Poe, Coze, and FlowGPT, are specifically designed to harness the advanced capabilities of LLMs for a variety of purposes, tasks, or scenarios.
  • LLM app store: A centralized platform that hosts, curates, and distributes LLM apps, enabling users to discover and access tailored intelligent services.
[Figure: LLM app store diagram]

Topics

We welcome submissions on topics including, but not limited to, the following:

  • LLM app store architectures and designs
  • LLM app store mining and analysis
  • Security and privacy of LLM apps/stores
  • LLM app development tools and frameworks
  • User feedback & reputation in LLM app stores
  • Quality of LLM apps/stores
  • LLM app recommendation
  • Economic models and monetization strategies
  • Regulatory and compliance issues
  • Case studies and best practices
  • Performance evaluation and optimization
  • Impact of LLM apps on society and industry
  • Vision of LLM apps/stores
  • Tools/datasets for analyzing LLM app stores

Potential Research Questions

  1. How do the distributions of app categories differ across various LLM app stores? What are the most common features or capabilities advertised in LLM app descriptions?
  2. How do internal factors, such as an app's advertised features, category, description length, the creator's productivity, instruction complexity, the presence of knowledge files and third-party services, and conversation starters, influence the popularity of LLM apps across different stores? To what extent do these factors correlate with success metrics such as user ratings and engagement levels?
  3. What are the differences in user ratings and reviews for functionally similar apps across different LLM app stores? Are there platform-specific trends in user engagement (e.g., conversation counts and follower numbers) for comparable apps? How do retention rates and user loyalty differ across platforms for apps in the same category? What unique characteristics do the most popular LLM apps exhibit on each specific platform?
  4. How do privacy policies and data handling practices vary across LLM apps and stores? Are there instances of policy violations, and if so, what are the nature, frequency, and implications of these violations across different stores?
  5. What patterns and potential concerns can be identified in the utilization of third-party services across LLM apps? How do these services impact app functionality, user privacy, and overall ecosystem security?
  6. How does incorporating knowledge files influence an LLM app's performance and user reception? In what ways do LLM apps with custom knowledge files or instructions differ in performance from their base models across various task categories, considering both potential improvements and limitations?
  7. How do custom temperature settings of FlowGPT and Poe LLM apps affect user interaction and satisfaction?
  8. What are the pros and cons of stores that allow apps to build on multiple base models (like FlowGPT and Poe) versus stores limited to a single company's models (like GPT Store)? Does the creator's choice of base model affect an app's popularity and user ratings?
  9. What security vulnerabilities or risks are common among LLM apps and LLM app stores?

Important dates

Paper Submission Deadline: Saturday, March 15, 2025 (AoE)
Notification of Acceptance: Monday, March 31, 2025 (AoE)
Camera-Ready Papers Due: Thursday, April 24, 2025 (AoE)
Workshops are tentatively scheduled for June 26 and 27, 2025. At least one author of each accepted paper must register for and attend the workshop to present their work.

Submission Guidelines

We welcome the following two types of submissions:

  • Position Papers (1-4 pages including references): Well-argued position or work in progress.
  • Research Papers (8 pages including references): Technical research, experience reports, empirical studies, etc.

Requirements

  • Originality: Submissions must be original and not under review elsewhere.
  • Format:
    • Submissions must conform to the FSE Format and Submission Guidelines and use the ACM Primary Article Template (2-column format).
    • For LaTeX, use sample-sigconf.tex and include the following (a fuller preamble sketch appears after this list):
      \documentclass[sigconf,screen,review,anonymous]{acmart}
      \acmBooktitle{Companion Proceedings of the 33rd ACM Symposium on the Foundations of Software Engineering (FSE '25), June 23--27, 2025, Trondheim, Norway}
    • For Word, use the Interim Template (not the New Workflow).
  • Language & File Type: Submissions must be in English and in PDF format.
  • Double-Anonymous Review:
    • Authors must not reveal their identities in the manuscript (e.g., omit author names, affiliations, and identifying acknowledgments).
    • For more details, refer to the Double-Anonymous Review Process.
  • Submission System: Submit via HotCRP by the paper submission deadline, Saturday, March 15, 2025 (AoE).
  • Publication: Accepted papers will be published in the FSE 2025 companion proceedings. The official publication date is when ACM makes the proceedings available (up to two weeks before the conference).
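
For convenience, here is a minimal preamble sketch built around the two required lines above; the title, author block, section text, and bibliography file name are placeholders rather than part of the official template, and the FSE Format and Submission Guidelines take precedence wherever they differ:

    \documentclass[sigconf,screen,review,anonymous]{acmart}
    % Booktitle required for the FSE 2025 companion proceedings
    \acmBooktitle{Companion Proceedings of the 33rd ACM Symposium on the Foundations of Software Engineering (FSE '25), June 23--27, 2025, Trondheim, Norway}

    \begin{document}

    % Placeholder metadata; the 'anonymous' option suppresses author details in the PDF
    \title{Your LLM App Store Study}
    \author{Anonymous Author(s)}
    \affiliation{\institution{Anonymous Institution}\country{Country}}

    \begin{abstract}
    A short summary of the submission.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Body text goes here.

    % ACM reference format with your own bibliography file (placeholder name)
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}

    \end{document}

The review and anonymous options add line numbers and hide author information in the generated PDF, consistent with the double-anonymous review requirement above.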

Review Process

  • Submissions will be peer-reviewed by at least three program committee members.
  • Evaluation criteria include originality, relevance, technical soundness, and clarity of presentation.

Organization

Organizing Committee

  • Haoyu Wang, Huazhong University of Science and Technology, China
  • Yanjie Zhao, Huazhong University of Science and Technology, China
  • John Grundy, Monash University, Australia
  • Xiapu Luo, The Hong Kong Polytechnic University, Hong Kong

Publicity Chair

  • Xinyi Hou, Huazhong University of Science and Technology, China

Technical Program Committee

Contact

If you have any further questions or require assistance, please do not hesitate to contact us at llmappws@gmail.com. We are happy to assist with any inquiries regarding the workshop or your participation.