Product Feedback

  1. editing notebooks didn't have so much lag. It is almost unusable.

    Editing notebooks is so slow sometimes, regardless of computer, internet connection, etc. Rebooting doesn't help, more RAM or CPU doesn't help, a different browser doesn't help. It's awful, with 10 or more seconds of lag just typing individual characters or clicking/highlighting different parts of the notebook. HELP!

    12 votes
    1 comment  ·  Notebooks
  2. How to start cluster in trial account?

    The trial account only allows 4 cores, yet with the new machines the driver takes 4 and a single worker takes another 4.
    Any ideas on how to start a cluster on a trial account?

    13 votes
    0 comments  ·  Cluster management
  3. Workspace was more like JupyterHub

    JupyterHub's navigation is very easy to use and does not keep expanding out and closing notebooks; it would be worth considering merging the best of both UIs.

    26 votes
    2 comments  ·  Navigation UI
  4. Writing streams back to the mounted Azure Data Lake Store being fully distributed (like reading)

    The fact that you can mount ADLS and read a huge stream directly into a Spark DataFrame is great.
    But writing data back to ADLS doesn't really work.
    The Spark API saves it as multiple chunks (one file per partition, HDFS-style) on top of the mounted ADLS rather than as one stream written directly to ADLS, so the result is not one distributed stream but many local sub-streams. Could this be fixed?
    Because right now I have to either:
    a) collect all data to the driver - not scalable;
    b) repartition into 1 partition and save it - slow, and the output file name still needs to be cleaned up (see the sketch below);
    c)…
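    For reference, a minimal sketch of workaround (b) on a mounted path. The mount point, directory, and file names below are placeholders, and df is assumed to be the DataFrame in question; dbutils.fs is used to pick up the single part file and give it a clean name.

      # Workaround (b): force a single partition, then move the lone part file.
      # "/mnt/adls/out_tmp" and "final.csv" are placeholder names.
      tmp_dir = "/mnt/adls/out_tmp"
      final_path = "/mnt/adls/out/final.csv"

      (df.coalesce(1)                    # one partition -> one part file (not scalable for huge data)
         .write.mode("overwrite")
         .option("header", "true")
         .csv(tmp_dir))

      # Locate the single part file Spark produced and move it to the clean name.
      part_file = [f.path for f in dbutils.fs.ls(tmp_dir) if f.name.startswith("part-")][0]
      dbutils.fs.mv(part_file, final_path)
      dbutils.fs.rm(tmp_dir, True)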

    2 votes
    0 comments  ·  Data import / export
  5. Collapse all headings

    It would be great to be able to collapse all headings in a notebook. Upon coming back to a notebook, all are expanded by default (which is fine), but I frequently spend the first minute or two collapsing everything so I can navigate faster.

    3 votes
    0 comments
  6. Allow running empty cells.

    When I want to run multiple cells in a row, I hold shift and press Enter multiple times to run these cells. Unfortunately, the notebook will get stuck on any empty cells. From an organizational or aesthetic perspective, I like adding empty cells to break up the space, but this makes them harder to run. My workaround is to ensure all empty cells have two empty lines instead of one empty line.

    For comparison, Jupyter allows executing empty cells.

    6 votes
    1 comment  ·  Notebooks
  7. Retry button on Run summary page when job has failed

    When a scheduled job run fails, it'd be convenient to have a really easy way to invoke a retry of that job with the same parameters.

    3 votes
    1 comment  ·  Notebooks
  8. Don't require same column order when importing into DWH using sqldwh connector

    At the moment the sqldwh connector expects the columns in the DataFrame to be in the same order as in the DWH table, but this is not always the case. For example, in the DWH we have a surrogate ID as the first column of every dimension table; it is autogenerated, so it is not present in the DataFrames. To make the sqldwh connector work, we had to resort to the workaround of putting the surrogate ID as the last column of the DWH table (a related column-order sketch is below).
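    As a point of reference, a minimal sketch of lining the DataFrame columns up with the target table order before the write, for the cases where all columns do exist in the DataFrame. The column list, table name, and connection options are placeholders, and df is the DataFrame being loaded.

      # Reorder the DataFrame columns to match the DWH table definition (placeholder names).
      target_order = ["customer_name", "country", "created_at"]
      df_ordered = df.select(*target_order)

      (df_ordered.write
          .format("com.databricks.spark.sqldw")
          .option("url", jdbc_url)                               # placeholder JDBC connection string
          .option("tempDir", "wasbs://container@account.blob.core.windows.net/tmp")  # placeholder staging dir
          .option("forwardSparkAzureStorageCredentials", "true")
          .option("dbTable", "dbo.DimCustomer")                  # placeholder table name
          .mode("append")
          .save())

    This does not cover the autogenerated surrogate ID case from the request, which is exactly why matching columns by name rather than by position would help.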

    3 votes
    0 comments  ·  Data import / export
  9. Groups can be owners of jobs. It is ridiculous to have only one person being able to be Owner of a job

    We are a team of 7 people and only 1 person can update jobs.

    {"error_code":"PERMISSION_DENIED","message":"User ABC does not have Owner or Admin permissions on job 37116"

    We also cannot add a group as an owner. This is ridiculous and can hardly be considered an idea/feature request, as it has been expected behaviour since Windows for Workgroups 3.11.

    Cheers,
    Wilder

    12 votes
    1 comment  ·  Account management
  10. Tabular data editor within a notebook

    I need to manipulate ~50 input fields (currently in a tabular format) in a quick manipulate > run notebook > see result > manipulate again loop. Today I either have to use terribly laid out widget inputs and gets (sketched below), or re-upload an entire spreadsheet every time I want to see what a tweak in the data set would result in.
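    For context, a minimal sketch of the widget-based loop described above; the field names are placeholders. Every tweak means editing the widgets one by one, which is the pain point.

      # Current workaround: one text widget per input field, read back with dbutils.widgets.get().
      fields = ["discount_rate", "growth_rate", "churn_rate"]   # placeholder names; in practice ~50 of these

      for name in fields:
          dbutils.widgets.text(name, "0.0")   # renders a text box at the top of the notebook

      params = {name: float(dbutils.widgets.get(name)) for name in fields}
      # params then feeds the rest of the notebook run.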

    3 votes
    0 comments  ·  Notebooks
  11. GUI for submitted jobs via execution API

    Please develop a GUI inside the Databricks web app (eastus2.azuredatabricks.net) to view the details of jobs submitted to a particular cluster, with their command id, execution context, class name, etc. This will help us identify which jobs are in the queued, running, finished, and failed states (a REST-based sketch of what can be polled today follows below).

    Sample UI attached
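    For reference, a minimal sketch of what can be polled today through the Jobs REST API, assuming a personal access token (placeholder below); it returns run-level states only, not the command id or execution context asked for above, which is part of why a dedicated GUI would help.

      import requests

      HOST = "https://eastus2.azuredatabricks.net"   # workspace URL from the request
      TOKEN = "dapi..."                              # placeholder personal access token

      resp = requests.get(
          f"{HOST}/api/2.0/jobs/runs/list",
          headers={"Authorization": f"Bearer {TOKEN}"},
          params={"active_only": "false", "limit": 25},
      )
      for run in resp.json().get("runs", []):
          state = run["state"]
          print(run["run_id"], state["life_cycle_state"], state.get("result_state", ""))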

    2 votes
    0 comments  ·  Dashboards
  12. 21 votes
    1 comment  ·  External libraries / applications
  13. Workspace API should sync cell headers

    When syncing the workspace to Python files using the databricks CLI, the cell headers are not included. We're using this feature for Git integration, and since the headers are not part of the sync, they can't be used (the kind of source export involved is sketched below).
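    For context, a minimal sketch of the source export that the sync is based on, done here through the Workspace REST API rather than the CLI; the workspace URL, token, and notebook path are placeholders.

      import base64
      import requests

      HOST = "https://<workspace>.azuredatabricks.net"   # placeholder workspace URL
      TOKEN = "dapi..."                                   # placeholder personal access token

      resp = requests.get(
          f"{HOST}/api/2.0/workspace/export",
          headers={"Authorization": f"Bearer {TOKEN}"},
          params={"path": "/Shared/my_notebook", "format": "SOURCE"},   # placeholder notebook path
      )
      source = base64.b64decode(resp.json()["content"]).decode("utf-8")
      print(source)   # the exported .py source that the Git integration sees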

    3 votes
    0 comments  ·  Data import / export
  14. Azure Databricks mount points mounted on cluster level

    As far as I understand, this is already an option on AWS. I would like to arrange mount point access at the cluster level instead of at the workspace level. This would help me secure the data inside the mount points.

    10 votes
    0 comments  ·  Cluster management
  15. if the 'cancel' button wasn't so close to 'Spark jobs' arrow

    The cancel button is really close to the "Spark jobs" text and arrow. It is not difficult to imagine accidentally cancelling a long-running job while just trying to expand the details.

    4 votes
    0 comments  ·  Notebooks
  16. 1 vote
    0 comments  ·  REST API
  17. Add documentation using SHIFT+TAB

    As in Jupyter notebooks, Databricks should add a simple shortcut (SHIFT+TAB in IPython) to view the documentation for a specific function. This functionality is used a lot to quickly find out the required parameters as well as the optional arguments of a function (a current workaround is sketched below).
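    Until such a shortcut exists, the standard Python help() call can be run in a cell as a workaround; a minimal example:

      # Workaround: print the signature and docstring of a function from a cell.
      from pyspark.sql import functions as F

      help(F.date_format)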

    50 votes
    5 comments  ·  Notebooks
  18. cells had a %skip magic

    A way to mark that cells should be skipped instead of executed would be great.

    10 votes
    0 comments  ·  Notebooks
  19. Night mode

    It would be really nice if there was a night mode for notebooks in databricks. Or databricks could have "theme" selection, so I would be able to write code using the VS Code interface style, for example.

    23 votes
    1 comment  ·  Notebooks
  20. Job was queued if max concurrent reached.

    I run my jobs via the API, with some parameters.
    There is a limit on concurrent runs, which is fine, but when it is hit, jobs should not be skipped; they should instead be queued and executed once the number of running jobs allows it again (a client-side sketch is below).
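    A minimal client-side sketch of the desired queueing behaviour; the workspace URL, token, job id, and parameters are placeholders, and the exact response returned when the concurrency limit is hit is an assumption here, so the sketch simply retries on any non-200 response.

      import time
      import requests

      HOST = "https://<workspace>.azuredatabricks.net"   # placeholder workspace URL
      TOKEN = "dapi..."                                   # placeholder personal access token

      def run_now_queued(job_id, params, wait_s=60):
          """Keep retrying run-now until the workspace accepts the run."""
          while True:
              resp = requests.post(
                  f"{HOST}/api/2.0/jobs/run-now",
                  headers={"Authorization": f"Bearer {TOKEN}"},
                  json={"job_id": job_id, "notebook_params": params},
              )
              if resp.status_code == 200:
                  return resp.json()["run_id"]
              time.sleep(wait_s)   # assumed: a rejected submit means the concurrency limit was hit

      run_id = run_now_queued(123, {"run_date": "2019-01-01"})   # placeholder job id and parameter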

    22 votes
    1 comment  ·  REST API