LLMapp 2025

The 1st International Workshop on LLM App Store Analysis

Mon 23 - Fri 27 June 2025, Trondheim, Norway

held in conjunction with the ACM International Conference on the Foundations of Software Engineering (FSE 2025)


Introduction

The 1st International Workshop on LLM App Store Analysis (LLMapp 2025), co-located with FSE 2025 in Trondheim, Norway, invites submissions that explore various aspects of large language model (LLM) app stores. This workshop aims to bring together researchers, industry practitioners, and students to discuss the latest trends, challenges, and future directions in LLM app ecosystems.

LLMs are trained on vast amounts of text data, allowing them to perform a wide range of natural language processing tasks. The advent of LLMs has opened up new possibilities for various applications, including chatbots, content generation, language translation, and sentiment analysis. As the capabilities of LLMs continue to expand, there has been growing interest in making these models accessible to a broader audience. This has led to the emergence of LLM app stores such as the GPT Store, Poe, Coze, and FlowGPT.

These platforms provide a centralized marketplace where users can browse, download, and use LLM-based apps across various domains, such as productivity, education, entertainment, and personal assistance.

Definitions

  • LLM app: A specialized application (app) powered by an LLM, distinct from conventional mobile apps that may incorporate LLM technology. These apps, typically found on platforms like GPT Store, Poe, Coze, and FlowGPT, are specifically designed to harness the advanced capabilities of LLMs for a variety of purposes, tasks, or scenarios.
  • LLM app store: A centralized platform that hosts, curates, and distributes LLM apps, enabling users to discover and access tailored intelligent services.
[Figure: LLM app store diagram]

Topics

We welcome submissions on, but not limited to, the following topics:

  • LLM app store architectures and designs
  • LLM app store mining and analysis
  • Security and privacy of LLM apps/stores
  • LLM app development tools and frameworks
  • User feedback & reputation in LLM app stores
  • Quality of LLM apps/stores
  • LLM app recommendation
  • Economic models and monetization strategies
  • Regulatory and compliance issues
  • Case studies and best practices
  • Performance evaluation and optimization
  • Impact of LLM apps on society and industry
  • Vision of LLM apps/stores
  • Tools/datasets for analyzing LLM app stores

Potential Research Questions

  1. How do the distributions of app categories differ across various LLM app stores? What are the most common features or capabilities advertised in LLM app descriptions? (An illustrative analysis sketch follows this list.)
  2. How do internal factors, such as an app's advertised features, category, description length, the creator's productivity, instruction complexity, presence of knowledge files, use of third-party services, and conversation starters, correlate with its success across different stores, as measured by user ratings, engagement levels, and overall popularity?
  3. What are the differences in user ratings and reviews for functionally similar apps across different LLM app stores? Are there platform-specific trends in user engagement (e.g., conversation counts, and follower numbers) for comparable apps? How do retention rates and user loyalty differ across platforms for apps in the same category? What unique characteristics do the most popular LLM apps exhibit on each specific platform?
  4. How do privacy policies and data handling practices vary across LLM apps and stores? Are there instances of policy violations, and if so, what are the nature, frequency, and implications of these violations across different stores?
  5. What patterns and potential concerns can be identified in the utilization of third-party services across LLM apps? How do these services impact app functionality, user privacy, and overall ecosystem security?
  6. How does the incorporation of knowledge files influence an LLM app's performance and user reception? In what ways do LLM apps with custom knowledge files or instructions differ in performance from their base models across various task categories, considering both potential improvements and limitations?
  7. How do custom temperature settings in FlowGPT and Poe apps affect user interaction and satisfaction?
  8. What are the pros and cons of supporting multiple base models (as FlowGPT and Poe do) versus restricting apps to a single company's models (as the GPT Store does)? Does the creator's choice of base model affect an app's popularity and user ratings?
  9. What security vulnerabilities or risks are common among LLM apps and LLM app stores?
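
As one concrete illustration of how question 1 could be approached, the sketch below compares app-category distributions across stores. It is a minimal example only: the input file name and its columns (store, category) are hypothetical placeholders for metadata that would in practice be crawled from each LLM app store.

```python
# Minimal sketch for research question 1: do app-category distributions
# differ across LLM app stores? The CSV file and column names below are
# hypothetical; a real analysis would start from crawled store metadata.
import pandas as pd
from scipy.stats import chi2_contingency

# One row per app: the store it was collected from and its listed category.
apps = pd.read_csv("llm_app_metadata.csv")  # hypothetical columns: store, category

# Cross-tabulate category counts per store.
counts = pd.crosstab(apps["store"], apps["category"])
print(counts)

# Chi-squared test of independence: are categories distributed
# differently across stores?
chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4g}")

# Per-store category shares for a side-by-side comparison.
print(counts.div(counts.sum(axis=1), axis=0).round(3))
```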

Important dates

Paper Submission Deadline: Tuesday, February 25, 2025 (AoE)
Notification of Acceptance: Tuesday, March 25, 2025 (AoE)
Camera-Ready Papers Due: Thursday, April 24, 2025 (AoE)

Submission Guidelines

We welcome the following two types of submissions:

  • Position Papers (1-4 pages, including references): A well-argued position or a report on work in progress.
  • Research Papers (8 pages including references): Technical research, experience reports, empirical studies, etc.

Requirements

  • All submissions must be original and not under review elsewhere.
  • Submissions must conform to the FSE Format and Submission Guidelines.
  • Papers must be submitted via the HotCRP submission system (https://llmapp25.hotcrp.com/) by February 25, 2025 (AoE).
  • The official publication date of the workshop proceedings is the date the proceedings are made available by ACM. This date may be up to two weeks prior to the first day of FSE 2025. The official publication date affects the deadline for any patent filings related to published work.

Review Process

  • Submissions will be peer-reviewed by at least three members of the program committee.
  • Evaluation criteria include originality, relevance, technical soundness, and clarity of presentation.

Organization

Organizing Committee

  • Haoyu Wang, Huazhong University of Science and Technology, China
  • Yanjie Zhao, Huazhong University of Science and Technology, China
  • John Grundy, Monash University, Australia
  • Xiapu Luo, The Hong Kong Polytechnic University, Hong Kong

Publicity Chair

  • Xinyi Hou, Huazhong University of Science and Technology, China

Technical Program Committee

To be announced (invitations in progress)

Contact

If you have any further questions or require assistance, please do not hesitate to contact us at llmappws@gmail.com. We are happy to assist with any inquiries regarding the workshop or your participation.